Reducing Complexity in OTC Clearing

We are developing solutions for the OTC clearing marketplace that reduce cost, complexity, and operational inefficiency for market participants.

MyCCP has a multi-tier architecture: a desktop front-end, a stateless middle tier, and a compute-focused backend tier, described below.

The front-end is currently a desktop application written in C# with WPF on .NET, targeting Windows desktops. Customers may also develop a front-end of their own choosing, as the entire service layer is exposed as a set of web APIs (gRPC over HTTP).
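As a minimal sketch, a customer-built client might connect to the service layer like this. The endpoint and the TradeService stub referenced in the comments are hypothetical stand-ins, not MyCCP's published API.

```java
// A customer-built front-end client connecting to MyCCP's service layer
// over gRPC. The host, port, and the TradeService stub referenced in the
// comments are hypothetical stand-ins, not MyCCP's published API.
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class CustomFrontEnd {
    public static void main(String[] args) {
        // Open a TLS channel to a middle-tier server (placeholder host/port).
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("myccp.example.com", 8443)
                .useTransportSecurity()
                .build();

        // A stub generated from MyCCP's .proto files would be used here, e.g.:
        // TradeServiceGrpc.TradeServiceBlockingStub trades =
        //         TradeServiceGrpc.newBlockingStub(channel);
        // Trade trade = trades.getTrade(
        //         TradeRequest.newBuilder().setTradeId("T-1001").build());

        channel.shutdown();
    }
}
```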

The middle tier is built with Java and Spring Boot and is portable across Windows, Linux, and macOS. The front-end and the middle tier communicate over gRPC, which resembles a traditional RESTful API but uses a more efficient binary encoding of data on the wire. The middle tier manages most of the transactional data, such as trades and market data, and stores it in an SQL database; SQL Server and PostgreSQL are currently supported, with plans to add support for NoSQL databases. Middle-tier services are stateless, which makes it easy to load-balance requests across multiple middle-tier servers.
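To illustrate what statelessness means in practice, the sketch below keeps no session or request state in the process, so any instance behind the load balancer can serve any call. The table and column names are illustrative, not MyCCP's actual schema.

```java
// A stateless middle-tier service: all state lives in the SQL database,
// so any middle-tier instance can handle any request. Table and column
// names are hypothetical; the LIMIT clause shown is PostgreSQL syntax
// (SQL Server would use TOP 1 or OFFSET/FETCH).
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MarketDataService {

    private final DataSource dataSource; // pooled connections to the SQL database

    public MarketDataService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // The handler holds no state of its own; it just reads and returns.
    public double latestPrice(String instrumentId) throws Exception {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT price FROM market_data WHERE instrument_id = ? " +
                     "ORDER BY as_of DESC LIMIT 1")) {
            ps.setString(1, instrumentId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getDouble(1) : Double.NaN;
            }
        }
    }
}
```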

The backend tier is written in C++ and consists of Risk Engines that are responsible for all the expensive computations. Additionally, one of the backend servers hosts the Limit Checker, which uses a local RocksDB datastore for persistence; due to the nature of the data it holds, only one instance of the Limit Checker can be active at any time. All other risk engines are stateless computation engines that simply process requests from the middle tier. Communication between the middle tier and the backend tier runs over TCP/IP and uses protocol buffers, with libuv serving as the backend's networking layer. Because the risk engines are stateless, any number of them can be deployed.
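The sketch below shows one way the middle-tier side of such a call could look: a length-prefixed protocol-buffer payload written to a TCP socket. The 4-byte framing and the RiskRequest/RiskReply names in the comments are assumptions for illustration, not MyCCP's documented wire format.

```java
// Middle-tier side of a backend call: a length-prefixed protobuf payload
// sent over TCP. The 4-byte framing and the RiskRequest / RiskReply
// message names are assumptions, not MyCCP's documented wire format.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class RiskEngineClient {

    public static byte[] call(String host, int port, byte[] requestBytes) throws Exception {
        try (Socket socket = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream());
             DataInputStream in = new DataInputStream(socket.getInputStream())) {

            // requestBytes would come from RiskRequest.toByteArray().
            out.writeInt(requestBytes.length);
            out.write(requestBytes);
            out.flush();

            // Read the framed reply; RiskReply.parseFrom(replyBytes) would decode it.
            byte[] replyBytes = new byte[in.readInt()];
            in.readFully(replyBytes);
            return replyBytes;
        }
    }
}
```

Because the engines are stateless, the caller can send each request to any deployed engine, for example by rotating through the available hosts.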

The middle tier can publish events to Apache Kafka topics, giving applications that wish to do further processing of events occurring in the MyCCP system an easy and scalable way to consume them. As an example, a message consumer is being implemented that reads these events and indexes them into an Elasticsearch store for easier access.
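As a rough sketch, a middle-tier component might emit events with Kafka's standard producer API as shown below. The topic name and JSON-string payloads are assumptions; MyCCP's actual topic layout and serialization are not specified here.

```java
// Publishing MyCCP events to a Kafka topic with the standard producer API.
// The topic name ("myccp.trade-events") and JSON-string payloads are
// assumptions for illustration.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class TradeEventPublisher implements AutoCloseable {

    private final KafkaProducer<String, String> producer;

    public TradeEventPublisher(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Keying by trade ID keeps all events for one trade on the same
    // partition, so downstream consumers see them in order.
    public void publish(String tradeId, String eventJson) {
        producer.send(new ProducerRecord<>("myccp.trade-events", tradeId, eventJson));
    }

    @Override
    public void close() {
        producer.close();
    }
}
```

A consumer such as the Elasticsearch indexer mentioned above would subscribe to the same topic under a consumer group, allowing multiple instances to share the partitions.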

For disaster recovery, the SQL database must be replicated by the customer. At present, the RocksDB data store used by the Limit Checker must also be replicated at the storage level; we plan to add support for software-based replication in the Limit Checker. In addition, we are evaluating NoSQL databases with built-in replication to give customers greater flexibility.