{"id":1044207,"date":"2021-03-29T00:00:00","date_gmt":"2021-03-29T00:00:00","guid":{"rendered":"https:\/\/www.beyondsoft.com\/sg\/customer-stories\/aws-data-migration-boosts-scalability-and-flexibility-for-global-ad-company\/"},"modified":"2021-03-29T00:00:00","modified_gmt":"2021-03-29T00:00:00","slug":"aws-data-migration-boosts-scalability-and-flexibility-for-global-ad-company","status":"publish","type":"customer-story","link":"https:\/\/www.beyondsoft.com\/jp\/en\/customer-stories\/aws-data-migration-boosts-scalability-and-flexibility-for-global-ad-company\/","title":{"rendered":"AWS data migration boosts scalability and flexibility for global ad company"},"content":{"rendered":"

THE CHALLENGE<\/strong><\/p>\n\n\n

IAS had decided to gradually migrate its full infrastructure to AWS, including a solution that helps customers track and optimize ad campaigns across publishers and partners such as Facebook. Core to this technology is the ability to collect, process, aggregate, and analyze ad impressions, clicks, views, and other events related to individual ad sessions.<\/p>\n\n\n

Over the years, their on-premises solution for ingesting ad events had become unwieldy to support and difficult to scale. The solution was not built for the cloud: it needed to be refactored to simplify code management, take advantage of AWS native services, and perform optimally in the cloud. IAS needed coding and migration support from an AWS data migration veteran like Beyondsoft.<\/p>\n\n\n

THE SOLUTION<\/strong><\/p>\n\n\n

In less than six months, Beyondsoft updated and migrated the IAS solution to AWS. Prior to the migration, Beyondsoft refactored the solution\u2019s entire codebase and implemented a cloud-based ad event ingestion solution to support multiple partners, minimize operational costs and coding, and scale dynamically to accommodate sudden volume spikes.<\/p>\n\n\n

To streamline the code and enable scalability, Beyondsoft analyzed the logic for each partner, identifying shared code and pinpointing standard areas for partner-specific logic. This laid the foundation for a simpler, more transparent codebase that could be quickly extended to new partners while remaining easier and more cost-effective to support over the long term. The refactoring also optimized the solution for the cloud, leveraging native AWS capabilities.<\/p>\n\n\n

HOW IT WORKS<\/strong><\/p>\n\n\n

The comprehensive solution makes the most of AWS functionality while scaling dynamically with traffic. To achieve optimal performance, it separates the ad events pipeline into two components: a receiver at the front end and a deliverer at the back end.<\/p>\n\n\n

For the front-end receiver service, a lightweight Node.js web service behind Elastic Load Balancing (ELB) provides the HTTP API endpoints to which partners submit ad events. The application has a very short start-up time and scales automatically with incoming traffic volume. It validates and converts partner-specific API calls into standard ad event records, then passes these records to a background Kinesis Producer Library (KPL) process that buffers and pushes them to a Kinesis Data Stream, which smooths the shape of the incoming traffic so downstream processes can work more efficiently. Running the front end on auto-scaling Amazon Elastic Container Service (ECS) with AWS Fargate behind ELB keeps operational costs low while remaining a serverless, easy-to-maintain solution.<\/p>\n\n\n
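The receiver\u2019s validate-and-convert step can be sketched as a small Node.js function. This is an illustrative sketch only: the field names, the standard record shape, and the partner name are assumptions for demonstration, not IAS\u2019s actual schema.<\/p>\n\n\n

```javascript
// Sketch: validate a partner-specific payload and map it to a standard
// ad event record before it is handed to the KPL buffering process.
// All field names below are hypothetical, not the real IAS schema.
function toStandardAdEvent(partner, payload) {
  // Reject payloads missing the fields every partner must supply.
  if (!payload || !payload.sessionId || !payload.eventType) {
    throw new Error(`Invalid ${partner} ad event payload`);
  }
  // Map partner-specific fields onto one standard record shape.
  return {
    partner,                          // which partner submitted the event
    sessionId: String(payload.sessionId),
    eventType: payload.eventType,     // e.g. "impression", "click", "view"
    timestamp: payload.ts || Date.now(),
  };
}

// Example: normalizing a hypothetical partner payload.
const record = toStandardAdEvent('examplePartner', {
  sessionId: 'abc-123',
  eventType: 'impression',
  ts: 1617000000000,
});
console.log(record.sessionId); // "abc-123"
```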

On the back end of the Kinesis Data Stream, the deliverer service\u2014a Kinesis Client Library (KCL)-based application\u2014enriches ad event records with additional properties derived from external services and stores the records in a data lake built on Amazon S3 and the AWS Glue Data Catalog for further processing. Amazon CloudWatch collects and surfaces pipeline metrics for dashboard monitoring and alerts administrators when pipeline errors exceed specific thresholds.<\/p>\n\n\n
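The deliverer\u2019s enrichment step can be sketched as an async function that augments each record pulled off the stream. The geo-lookup service and the added fields here are stand-ins for illustration; the actual external services and properties are not described in this story.<\/p>\n\n\n

```javascript
// Sketch: enrich a standard ad event record with properties derived
// from an external service before it lands in the S3/Glue data lake.
// The lookup service and added fields are hypothetical examples.
async function enrichAdEvent(record, lookupGeo) {
  const geo = await lookupGeo(record.sessionId); // external-service call
  return {
    ...record,                 // keep all original record fields
    country: geo.country,      // property derived from the external service
    enrichedAt: Date.now(),    // when enrichment happened
  };
}

// Stubbed external service standing in for a real lookup API.
const stubGeoService = async () => ({ country: 'SG' });

// Example: enriching one record with the stubbed service.
enrichAdEvent(
  { partner: 'examplePartner', sessionId: 'abc-123', eventType: 'click' },
  stubGeoService
).then((enriched) => console.log(enriched.country)); // "SG"
```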

RESULTS<\/strong><\/p>\n\n\n