
Graphenus follows a white-box strategy, with transparency and flexibility to adapt to the different needs of companies: from replacing individual components to deploying in different types of environments.

Deployment models

  • Graphenus is containerised. This allows it to be adapted to both on-premise and cloud environments with minimal changes.
  • It is also possible to deploy a lighter version of Graphenus in constrained infrastructure environments.

Data acquisition

  • Graphenus can access multiple data sources thanks to the connectors provided by Spark and Trino (see the sketch after this list).
  • The integration of Kafka allows Graphenus to fit into event-driven architectures, propagating information in real time or near real time.
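
As an illustration of how such connectors are typically used, the sketch below reads an external relational table through Spark's generic JDBC connector. The URL, table name and credentials are hypothetical placeholders rather than real Graphenus endpoints, and the appropriate JDBC driver is assumed to be on the classpath.

    from pyspark.sql import SparkSession

    # Minimal sketch: pulling an external table through Spark's generic
    # JDBC connector. All connection details below are invented examples.
    spark = SparkSession.builder.appName("graphenus-ingest").getOrCreate()

    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://db.example.com:5432/sales")  # placeholder source
        .option("dbtable", "public.orders")
        .option("user", "reader")
        .option("password", "********")
        .load()
    )
    orders.show(5)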

Integration with other systems

  • Graphenus is fully API-driven. This facilitates integration with any external system (see the sketch after this list).
  • All Graphenus components are decoupled, which makes it easier to evolve and replace them.
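
To make this concrete, here is a minimal, hypothetical sketch of what integrating with an API-first platform over HTTP can look like. The base URL, the /datasets route and the token are invented placeholders; the actual Graphenus API routes are documented separately.

    import requests

    # Hypothetical sketch: an external system calling a REST endpoint.
    # BASE_URL, the /datasets route and the token are placeholders.
    BASE_URL = "https://graphenus.example.com/api/v1"

    resp = requests.get(
        f"{BASE_URL}/datasets",
        headers={"Authorization": "Bearer <token>"},
        timeout=10,
    )
    resp.raise_for_status()
    for dataset in resp.json():
        print(dataset["name"])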

Batch & Real Time

  • Graphenus supports traditional (batch) use cases as well as those requiring Real Time processing.
  • Business logic developed with Graphenus can be reused in both types of processing, as shown in the sketch after this list.
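
The sketch below shows one common way to achieve this kind of reuse, assuming Spark's DataFrame API: the same enrich function (a hypothetical piece of business logic) is applied unchanged to a static source and to a Kafka stream. Paths, broker and topic are placeholders, and the Kafka integration assumes the spark-sql-kafka package is available.

    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import DoubleType, StructField, StructType

    spark = SparkSession.builder.appName("shared-logic").getOrCreate()

    schema = StructType([
        StructField("amount", DoubleType()),
        StructField("fx_rate", DoubleType()),
    ])

    def enrich(df: DataFrame) -> DataFrame:
        # Business logic written once against the DataFrame API;
        # it is agnostic to batch vs. streaming execution.
        return df.withColumn("amount_eur", F.col("amount") * F.col("fx_rate"))

    # Batch: the same function over a static source (hypothetical path).
    batch_result = enrich(spark.read.schema(schema).json("/data/transactions"))

    # Streaming: identical logic over a Kafka topic (placeholder broker/topic).
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "transactions")
        .load()
    )
    parsed = raw.select(
        F.from_json(F.col("value").cast("string"), schema).alias("j")
    ).select("j.*")
    stream_result = enrich(parsed)
    # Starting the streaming query (stream_result.writeStream...start()) is omitted.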

Infrastructure needs

  • Graphenus is adaptable both to limited infrastructure environments and to clusters of hundreds of servers.
  • It is possible to start from a reduced computational footprint and scale according to the evolving needs of companies (see the sketch after this list).
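
As an illustrative sketch only, and not necessarily how Graphenus is configured internally: with Spark, one common way to start small and grow is dynamic allocation, which expands and shrinks the executor pool with the workload. The bounds below are arbitrary examples.

    from pyspark.sql import SparkSession

    # Example only: elastic sizing via Spark dynamic allocation.
    # Executor bounds are arbitrary; tune them to the actual cluster.
    spark = (
        SparkSession.builder.appName("graphenus-elastic")
        .config("spark.dynamicAllocation.enabled", "true")
        .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
        .config("spark.dynamicAllocation.minExecutors", "1")
        .config("spark.dynamicAllocation.maxExecutors", "200")
        .getOrCreate()
    )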

Multiple deployment options and plug&play mechanisms to adapt the platform to the specific use cases of each company.

Graphenus is fully adaptable to different infrastructure scenarios and different types of use cases:

  • Graphenus runs on containers (Docker), facilitating multiple deployment alternatives: 
    • On Premise
    • Cloud Vendors (ARSYS, AWS, Azure, Google)
    • Hybrid

  • Ease of adopting new storage formats (Delta Lake, Apache Iceberg...)

  • Graphenus facilitates the execution of processes in both batch and streaming modes.

  • Graphenus provides a multitude of connectors to different data sources, including event ingestion using Kafka (see the sketch below).
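
As a final hedged sketch, federated access through Trino can look like the following from Python. The host, catalog, schema and table are invented placeholders, and the example assumes the official trino client package is installed.

    import trino

    # Hypothetical sketch: querying a federated source through Trino's
    # Python DB-API client. Connection details are placeholders.
    conn = trino.dbapi.connect(
        host="trino.example.com",
        port=8080,
        user="analyst",
        catalog="hive",
        schema="default",
    )
    cur = conn.cursor()
    cur.execute("SELECT count(*) FROM events")
    print(cur.fetchone())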