Cloud-Native & Elastic

GNES is all-in-microservice: the encoder, indexer, preprocessor and router each run statelessly and independently in their own containers. They collaborate under the orchestration of Docker Swarm, Kubernetes, etc. Scaling, load balancing and automatic recovery come off-the-shelf in GNES.
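As a rough illustration of this layout, a Docker Swarm stack file could declare each microservice separately and scale it independently. This is only a sketch: the image tag, command names and flags below are assumptions, not the exact GNES CLI; consult the GNES documentation for the real invocation.

```yaml
# Illustrative Swarm stack: one service per GNES microservice.
# Commands and flags are placeholders, not verified GNES CLI syntax.
version: "3.4"
services:
  preprocessor:
    image: gnes/gnes:latest
    command: preprocess --yaml_path /config/preprocess.yml
  encoder:
    image: gnes/gnes:latest
    command: encode --yaml_path /config/encode.yml
    deploy:
      replicas: 3          # scale the stateless encoder horizontally
  indexer:
    image: gnes/gnes:latest
    command: index --yaml_path /config/index.yml
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure   # automatic recovery via the orchestrator
```

Because every service is stateless, the orchestrator is free to reschedule, restart, or replicate any of them without coordination logic inside GNES itself.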


State-of-the-Art

Taking advantage of the fast-evolving AI/ML/NLP/CV communities, we learn from best-of-breed deep learning models and plug them into GNES, making sure you always enjoy state-of-the-art performance.


Easy-to-Use

How long would it take to deploy a change that involves just swapping the encoder from BERT to ELMo, or switching a layer in VGG? In GNES, it is a one-line change in a YAML file. We abstract the encoding and indexing logic from the code into a YAML config, so that you can combine or stack encoders and indexers without touching the codebase.
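To make the "one line in a YAML file" claim concrete, a stacked encoder pipeline might be declared along these lines. The component and parameter names here are illustrative assumptions rather than the exact GNES schema; the point is that the encoder choice lives in config, not code.

```yaml
# Hypothetical pipeline config; component names are illustrative.
!PipelineEncoder
components:
  - !BertEncoder            # swapping to ELMo = changing this one line
    parameters:
      model_dir: /models/bert-base
  - !PCAEncoder             # stack a dimensionality reducer on top
    parameters:
      output_dim: 200
```

Since the pipeline is declarative, swapping, reordering, or stacking components requires no rebuild of the framework.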

Generic & Universal

Searching for text, images, or even short videos? Using Python, C, Java, Go or HTTP as the client? No matter which content form you have or which language you use, GNES can handle them all.
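For instance, any language that can speak HTTP can act as a client. The sketch below builds such a query request in Python using only the standard library; the endpoint path and payload shape are assumptions for illustration, not the documented GNES API.

```python
import json
import urllib.request

# Hypothetical endpoint and payload; the actual GNES frontend API may differ.
def build_query_request(host: str, port: int, query: str) -> urllib.request.Request:
    """Construct (but do not send) an HTTP search request."""
    payload = json.dumps({"texts": [query], "top_k": 10}).encode("utf-8")
    return urllib.request.Request(
        url=f"http://{host}:{port}/query",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("localhost", 8080, "cloud native search")
# urllib.request.urlopen(req) would send it to a running frontend
```

The same request could just as easily be issued from curl, Go's `net/http`, or Java's `HttpClient`, which is what makes the service client-agnostic.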

Model as Plugin

When the built-in models do not meet your requirements, simply build your own with one Python file and one YAML file. There is no need to rebuild the GNES framework: your models are loaded as plugins and rolled out online directly.
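A minimal sketch of what such a single-file plugin could look like is below. The class and method names mirror the general shape of an encoder plugin but are illustrative assumptions, not the actual GNES base-class API; a real plugin would subclass the appropriate GNES base encoder and ship with a matching YAML file declaring it.

```python
import hashlib
from typing import List

class FeatureHashingEncoder:
    """Toy custom encoder: map each text to a fixed-size count vector
    via feature hashing. Stands in for a user-defined model plugin."""

    def __init__(self, dim: int = 8):
        self.dim = dim

    def encode(self, texts: List[str]) -> List[List[int]]:
        vectors = []
        for text in texts:
            vec = [0] * self.dim
            for token in text.lower().split():
                # hash each token into one of `dim` buckets
                h = int(hashlib.md5(token.encode()).hexdigest(), 16)
                vec[h % self.dim] += 1
            vectors.append(vec)
        return vectors

encoder = FeatureHashingEncoder(dim=8)
vecs = encoder.encode(["hello world", "gnes as plugin"])
```

The companion YAML file would then reference this class by name, so the framework can discover and load it without any change to the GNES codebase itself.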

Best Practice

We love learning best practices from the community to help GNES reach the next level of availability, resiliency, performance, and durability. If you have any ideas or suggestions, feel free to contribute.