Carefully nestled between the TCell and Faction booths, the SendBird booth was bustling during AWS Summit SF. We had a lot of enthusiastic visitors, and the team was just as eager to introduce SendBird to the 8,700 developers, engineers, and business folks who attended.
Now SendBird finds itself in the happy lull after AWS Summit SF, anticipating AWS Summit Seoul, which rushes into view on April 18-19. Great engineers, great cloud infrastructure, great conversations. We look forward to it.
Bridging our two main offices, our attendance at both AWS Summit SF and Seoul underscores our commitment to global conversations around cloud technology and to helping newcomers and old hands alike learn more about our services on AWS. And, of course, it shows our dedication to our product, SendBird, the all-in-one chat solution for your business applications and the world’s most scalable and reliable real-time chat infrastructure.
A centerpiece of SendBird’s participation in SF was the talk, “How SendBird built a serverless log processing pipeline in 1 week,” given by our very own VP of Engineering, Jin Ku. The turnout for the talk in SF was excellent. It was exciting to see the audience casually engage Jin after his talk and to see our presence grow from previous conferences like API World and Developer Week.
SendBird's log processing pipeline
Now, I know what you’re thinking. You’re utterly despondent that you missed Jin’s talk, right? Lucky for you, Jin Ku is taking his act on tour! Ahem. At least to AWS Summit Seoul.
So if you plan to attend AWS Summit Seoul, check out the talk’s abstract:
As a chat solution serving enterprises, SendBird performs load-tests for all its largest customers. In this session, the SendBird team demonstrates how they use Amazon Kinesis, Amazon S3, AWS Lambda and Amazon Athena to build a processing pipeline to save and analyze the results of a massive-scale load test within just a few days. They share mistakes and lessons as they expand this pipeline into day-to-day operations such as aggregating customer usage data for billing purposes and blocking malicious traffic with the help of AWS WAF.
Any engineer interested in scaling their traffic to serve enterprise customers will learn how to use AWS to tackle an essential part of the process. Don’t be the sole engineer in Seoul to miss Jin’s talk.
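To make the pipeline in the abstract a little more concrete, here is a minimal sketch of the Kinesis-to-S3 step: a Lambda handler that decodes the records in a Kinesis-triggered event into newline-delimited JSON, the layout Athena can query once the object lands in S3. This is our illustration of the general pattern, not SendBird’s actual code; the bucket name and key scheme in the comment are hypothetical, and the S3 write is stubbed out.

```python
import base64
import json


def records_to_ndjson(event):
    """Decode a Kinesis-triggered Lambda event into newline-delimited JSON,
    one log entry per line, ready to be written to S3 and queried by Athena."""
    lines = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the Lambda event.
        payload = base64.b64decode(record["kinesis"]["data"])
        lines.append(json.dumps(json.loads(payload)))
    return "\n".join(lines) + "\n"


def handler(event, context):
    body = records_to_ndjson(event)
    # In a real deployment this would be an S3 put via boto3, e.g.:
    #   boto3.client("s3").put_object(
    #       Bucket="load-test-logs", Key="...", Body=body.encode())
    # ("load-test-logs" and the key scheme are illustrative, not SendBird's.)
    return {"lines": body.count("\n")}
```

Because each Lambda invocation handles one batch of records, the pipeline scales out with the stream’s shard count without any servers to manage.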
So you may be wondering how we partner with AWS to deliver the world’s most scalable and reliable chat infrastructure to your app. The following sheds a little light on how we use AWS.
Our SDKs, among their other superpowers, connect your application to our proprietary backend infrastructure, which sits on AWS. This is where we handle all your real-time chat and messaging traffic, scaling beyond a million concurrent connections per app.
Our websocket, worker, and API servers all run on Amazon EC2 instances and are directly managed by SendBird.
We use Amazon’s managed databases, Amazon Aurora and Amazon RDS, to persist all of our data. We like Aurora for its redundancy: data is replicated across three Availability Zones, so if one zone goes down, the other two keep your traffic flowing. It also stays highly available through spikes in traffic. With these products, we can be generously flexible to all the vicissitudes of your traffic.
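On the application side, surviving a database failover mostly comes down to retrying the connection while a replica is promoted. The sketch below shows one common way to do that, with exponential backoff; `connect` here is a hypothetical stand-in for any real DB driver’s connect call, not part of SendBird’s stack.

```python
import time


def connect_with_retry(connect, attempts=3, backoff_s=0.5):
    """Retry a database connection, as an app might while an Aurora
    replica is being promoted after the writer goes down.

    `connect` is any zero-argument callable that returns a connection
    object or raises ConnectionError on failure."""
    last_err = None
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError as err:
            last_err = err
            # Exponential backoff: 0.5s, 1s, 2s, ... between attempts.
            time.sleep(backoff_s * (2 ** attempt))
    raise last_err
```

Pointing the retry loop at a cluster endpoint (rather than an instance endpoint) lets the database service route reconnects to whichever instance is currently the writer.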
And, finally, we use Amazon ElastiCache as our in-memory data store and cache to improve our real-time performance.
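The usual pattern an in-memory store like ElastiCache serves is cache-aside: check the cache first, fall back to the database on a miss, and populate the cache on the way out. A minimal sketch of that pattern, with a plain dict standing in for the real Redis or Memcached client (this is our illustration, not SendBird’s implementation):

```python
import time


class CacheAside:
    """Cache-aside lookup with a TTL. A dict stands in for the actual
    in-memory store (e.g. Redis behind ElastiCache)."""

    def __init__(self, fetch_from_db, ttl_s=60.0):
        self._fetch = fetch_from_db   # fallback for cache misses
        self._ttl = ttl_s
        self._store = {}              # key -> (value, expiry timestamp)

    def get(self, key):
        hit = self._store.get(key)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]             # cache hit: skip the database
        value = self._fetch(key)      # cache miss: go to the database
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value
```

Keeping hot data in memory this way takes repeated reads off the database, which is what makes the real-time path fast under load.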
SendBird prides itself on supporting the APAC region’s chat and messaging. Our customers include GO-JEK, Traveloka, Tokopedia, Kookmin Bank, TMON, SSG, LG U+, and Nexon. To join the SendBird nest and become one of our valued customers, be sure to set up a meeting with our APAC sales team during AWS Summit Seoul on April 18-19.