TL;DR
We are faster, way faster. This means an order of magnitude less downtime when migrating your application from one infrastructure to another, or when refreshing your OLTP data into the data warehouse so you can analyze it sooner. Please check our extract-and-load benchmarks and replication benchmarks.
Background
Since the advent of computers, data has been a critical component of infrastructure, and technologies have evolved to store and retrieve it performantly while keeping costs low. We have seen the rise of relational (RDBMS) databases, then NoSQL databases, and we are now in the cloud database era, which promises near-infinite scalability and elasticity. A persistent challenge is moving this data from one platform to another. With data volumes exploding and applications bound by ever-shorter downtime SLAs, this challenge has become a leading cause of failure: 60% to 80% of data migration projects fail.
Data Migration
Consider a 100 GB table and a data migration solution that moves data at 10 Mbps (the average speed of other solutions): the transfer alone takes roughly 22 hours. Raise the rate to 100 Mbps and the same data moves in a little over 2 hours. At 1 Gbps (wirekite), the migration finishes in about 13 minutes. In practice, the transfer rate is limited not by the network but by how fast data can be extracted from the source database and loaded into the target database.
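To make the arithmetic above easy to check, here is a minimal Python sketch; the function name and the decimal convention (1 GB = 8,000 megabits) are illustrative assumptions, not part of any product:

```python
# Back-of-the-envelope transfer-time estimate for a fixed data size
# at different throughput rates. Assumes decimal units: 1 GB = 8,000 Mb.

def transfer_hours(size_gb: float, rate_mbps: float) -> float:
    """Hours needed to move size_gb gigabytes at rate_mbps megabits/second."""
    megabits = size_gb * 8_000            # convert GB to megabits
    return megabits / rate_mbps / 3_600   # seconds -> hours

for rate_mbps in (10, 100, 1_000):        # 10 Mbps, 100 Mbps, 1 Gbps
    hours = transfer_hours(100, rate_mbps)
    print(f"{rate_mbps:>5} Mbps -> {hours:5.2f} h ({hours * 60:6.1f} min)")
```

Running it reproduces the figures above: about 22 hours at 10 Mbps, a little over 2 hours at 100 Mbps, and roughly 13 minutes at 1 Gbps.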
