
Graphenus follows a white-box strategy, with the transparency and flexibility to adapt to the different needs of each company: from replacing individual components to deploying in different types of environments.

Deployment models

  • Graphenus is containerised, allowing it to be adapted to both on-premise and cloud environments with minimal customisation (illustrated in the sketch after this list).

  • It is also possible to deploy a reduced version of Graphenus in limited-infrastructure environments.
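
Because the platform ships as container images, standing up an instance is the same operation on a laptop, an on-premise server, or a cloud VM. A minimal, hedged sketch using the Docker SDK for Python (the image name, port, and environment variable below are hypothetical placeholders, not actual Graphenus artifacts):

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to the local Docker daemon; the same call works on an
# on-premise host or a cloud VM, which is what makes container
# images portable across environments.
client = docker.from_env()

# "graphenus/core:latest", port 8080, and GRAPHENUS_MODE are
# hypothetical placeholders used only for illustration.
container = client.containers.run(
    "graphenus/core:latest",
    detach=True,
    ports={"8080/tcp": 8080},
    environment={"GRAPHENUS_MODE": "standalone"},
)
print(container.status)
```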

Data provisioning

  • Graphenus provides access to multiple data sources through the connectors offered by Spark and Trino (see the sketch after this list).

  • The integration of Kafka allows Graphenus to be incorporated into event-driven architectures, propagating information in real time or near real time.
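
A hedged sketch of what this connectivity can look like through Spark's standard connectors (the JDBC URL, credentials, broker, and topic below are hypothetical placeholders; the Kafka source additionally assumes the spark-sql-kafka package is on the classpath):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("graphenus-ingest").getOrCreate()

# Batch read from a relational source through Spark's standard JDBC
# connector (URL, table, and credentials are hypothetical).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.com:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "reader")
    .option("password", "secret")
    .load()
)

# Streaming read from Kafka for event-driven propagation (broker and
# topic names are hypothetical).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1.example.com:9092")
    .option("subscribe", "order-events")
    .load()
)
```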

Integration with other systems

  • Graphenus exposes a unified set of interfaces, which facilitates integration with any external system.

  • All Graphenus components are decoupled, facilitating their evolution and replacement.

Batch & Real Time

  • Graphenus supports traditional batch use cases as well as those requiring real-time processing.

  • Business logic developed with Graphenus can be reused across both types of processing, as the sketch below illustrates.
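
A minimal sketch of this reuse pattern with Spark's structured APIs, where a single transformation function serves both modes (paths and column names are hypothetical placeholders):

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("graphenus-reuse").getOrCreate()

def enrich(df: DataFrame) -> DataFrame:
    # Business logic written once against the DataFrame API, so the
    # same function is valid for batch and streaming inputs alike.
    return (
        df.withColumn("amount_eur", F.col("amount") * F.col("fx_rate"))
          .filter(F.col("amount_eur") > 0)
    )

# Batch: apply the logic to data at rest.
raw = spark.read.parquet("/data/transactions")
enrich(raw).write.mode("overwrite").parquet("/data/enriched")

# Streaming: the identical function applied to files arriving over time.
stream = enrich(
    spark.readStream.schema(raw.schema).parquet("/data/incoming")
)
(stream.writeStream.format("parquet")
    .option("path", "/data/enriched_stream")
    .option("checkpointLocation", "/checkpoints/enriched")
    .start())
```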

Infrastructure needs

  • Graphenus adapts to both limited-infrastructure environments and clusters consisting of hundreds of servers.

  • It is possible to start with a reduced computational footprint and scale up as a company's needs evolve.

Multiple deployment options and plug-and-play mechanisms to adapt the platform to the specific circumstances of each company.

Graphenus is fully adaptable to different infrastructure scenarios and different types of use cases:

  • Graphenus runs in containers (Docker), facilitating multiple deployment alternatives:
    • On Premise
    • Cloud Vendors (ARSYS, AWS, Azure, Google)
    • Hybrid

  • Easy support for new storage formats (Delta Lake, Apache Iceberg, ...), as the sketch at the end of this list shows.

  • Graphenus facilitates the execution of processes in both batch and streaming modes.

  • Graphenus offers a wide range of connectors to different data sources, including event ingestion via Kafka®.
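
To illustrate the storage-format bullet above, a hedged sketch writing a DataFrame as a Delta Lake table; it assumes the delta-spark package is on the classpath, and all paths are hypothetical placeholders. An Apache Iceberg writer would follow the same pattern with its own catalog configuration.

```python
from pyspark.sql import SparkSession

# Standard Delta Lake session configuration; assumes the delta-spark
# package is available.
spark = (
    SparkSession.builder.appName("graphenus-formats")
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.read.parquet("/data/transactions")  # hypothetical input path

# Adopting a new table format is a one-line change in the writer.
df.write.format("delta").mode("overwrite").save("/lake/transactions_delta")
```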