Extreme Optimization of AsyncStorage in React Native

Apr 03, 2019

AsyncStorage is a unique built-in feature for storing data in React Native and a good way to store simple key-value data. To make the SendBird SDK less dependent on other packages, we considered using AsyncStorage as the main store, but we were frustrated by its lagging performance.


What's the problem with AsyncStorage?

Figure 1 displays the results of a performance test that reads and writes 2,000 items with AsyncStorage. We conducted the test on a Google Pixel 2 XL.

Figure 1. AsyncStorage read/write test (10 times)

For comparison, Figure 2 shows the results of the same test with localStorage in a mobile Chrome browser on a Google Pixel 2 XL.

Figure 2. LocalStorage read/write test (10 times)

AsyncStorage takes about 12x more time on average to read and write compared to localStorage. This article introduces a few techniques to optimize AsyncStorage so that it becomes fast enough for production use.

How can you improve AsyncStorage?

Optimization usually involves a trade-off. This article assumes that time is more valuable than memory. Under this assumption, AsyncStorage can be improved in the following ways:

  • Group items into blocks for fewer disk operations
  • Batch write
  • Dump Promise
  • Memory caching

Group items into a block for fewer disk writes

The first optimization trick is to group items into a single block (i.e. store several items in a single AsyncStorage object, called a "block"). Each block has a limit on the number of items it can store. See Figure 3.

Figure 3. The structure of blocks and items in the block manager. Each block has a key and holds many items; the block manager points to the last unfilled block.

The block manager is a global manager for blocks: it can assign a new block, find a block, or update a block. It holds the current block object and the cursor (the index of the current block). If the current block is full, the block manager assigns a new block to store additional items. When a new block is assigned, the current block switches to the new one and the cursor advances.

Because items are stored by block rather than by key, locating an item by its key is not straightforward. The block manager therefore maintains a key-to-blockKey map so that the block containing an item can be found from the item's key.

AsyncStorage holds the blocks and the block metadata, such as the cursor, the count (the total number of items), and the key-to-blockKey map. With this optimization, 1,000 AsyncStorage items shrink to 11 entries (10 blocks + 1 block metadata object) if the block size is 100.
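A minimal sketch of the block manager idea follows. Everything here is illustrative, not SendBird's actual implementation: a plain in-memory `Map` stands in for AsyncStorage, `BLOCK_SIZE` and the key names are made up, and the metadata is kept only in memory for brevity.

```javascript
// Hypothetical in-memory stand-in for AsyncStorage.
const storage = new Map();

const BLOCK_SIZE = 100; // max items per block (illustrative)

class BlockManager {
  constructor() {
    this.cursor = 0;          // index of the current (last unfilled) block
    this.count = 0;           // total number of items
    this.keyToBlockKey = {};  // item key -> block key map
    this.currentBlock = {};   // items of the current block
  }
  blockKey(index) {
    return `block-${index}`;
  }
  setItem(key, value) {
    if (Object.keys(this.currentBlock).length >= BLOCK_SIZE) {
      // Current block is full: advance the cursor and start a new block.
      this.cursor++;
      this.currentBlock = {};
    }
    this.currentBlock[key] = value;
    this.keyToBlockKey[key] = this.blockKey(this.cursor);
    this.count++;
    // One storage entry per block, not per item.
    storage.set(this.blockKey(this.cursor), this.currentBlock);
  }
  getItem(key) {
    const blockKey = this.keyToBlockKey[key];
    if (blockKey === undefined) return null;
    const block = storage.get(blockKey);
    return block ? block[key] : null;
  }
}
```

With a block size of 100, storing 250 items this way produces only 3 storage entries instead of 250, which is the source of the disk-operation savings described above.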

Batch write

Once we group items into blocks, however, a serious problem arises: the block manager writes block metadata so often that the sheer number of I/O operations can freeze the whole system.

Batching several write requests into one I/O operation solves this problem. Each write request is pushed into a batch write queue, and the queue is flushed after a short interval. Figure 4 demonstrates how batch write works.

Figure 4. The batch write process

In Figure 4, items 1, 2, and 3 change a variable number of times during the batch write listening period: item 1 changes three times, item 2 twice, and item 3 once. After the listening period, the batch write process collects the most recent version of each item and writes it. In this way, 6 write requests compress into 3 write operations, and the block metadata writes become much cheaper.
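The queue-then-flush mechanism can be sketched as follows. The `BatchWriter` name, the `write` callback, and the default interval are all assumptions for illustration; the key point is that a `Map` keyed by item key naturally keeps only the latest value per key.

```javascript
class BatchWriter {
  constructor(write, interval = 300) {
    this.write = write;     // underlying write operation, one call per key on flush
    this.interval = interval;
    this.queue = new Map(); // key -> latest value; older requests are overwritten
    this.timer = null;
  }
  push(key, value) {
    // Only the most recent value per key survives until the flush.
    this.queue.set(key, value);
    if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.interval);
    }
  }
  flush() {
    for (const [key, value] of this.queue) this.write(key, value);
    this.queue.clear();
    clearTimeout(this.timer);
    this.timer = null;
  }
}
```

Replaying the Figure 4 scenario (item 1 pushed three times, item 2 twice, item 3 once) results in exactly three calls to the underlying write, each with the latest value.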

Dump Promise

The Promise pattern is another major cause of AsyncStorage's performance drawbacks. In a controlled experiment, we found that using Promise is costly compared to not using it: Promise leads to slower processing times even when the operation involves no I/O. After purging Promise from the implementation and using callbacks instead, we achieved a 10-12x performance boost overall.
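The contrast can be illustrated with two hypothetical lookup helpers (the names and the plain-object store are invented for this sketch). A Promise allocates an object and defers resolution to a microtask even when the value is already in memory; a callback can run synchronously with no allocation at all.

```javascript
// Promise-based lookup: allocates a Promise and defers resolution
// to a microtask, even though no I/O happens here.
function getItemPromise(store, key) {
  return Promise.resolve(store[key]);
}

// Callback-based lookup: when the value is already in memory,
// the callback fires synchronously with no Promise allocation.
function getItemCallback(store, key, callback) {
  callback(null, store[key]);
}
```

On a hot path that runs thousands of times per second, skipping the per-call Promise allocation and microtask hop is where the reported speedup comes from; the trade-off is a less ergonomic callback-style API.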

Memory caching

Caching in memory is a common way to improve the performance of a data store because memory is much faster than disk. It makes sense, therefore, to cache the block metadata and the blocks in memory as well. But since memory is limited and the cache cannot hold everything, evicting obsolete entries is a must.

One simple way to accomplish this is to limit the number of items in memory, as in an LRU cache. Once the number of items reaches the limit, the caching process evicts the least recently used items from memory.
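A minimal LRU cache along these lines can be built on a JavaScript `Map`, which preserves insertion order: re-inserting a key moves it to the "most recently used" end, and the first key in the map is always the eviction candidate. The class below is a generic sketch, not SendBird's implementation.

```javascript
class LRUCache {
  constructor(limit) {
    this.limit = limit;  // max number of entries kept in memory
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Refresh recency by moving the key to the end of the Map.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.limit) {
      // Evict the least recently used entry (the first key in the Map).
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

In this scheme, blocks and block metadata would be cached through `get`/`set`, with cache misses falling back to a disk read.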

Clearing obsolete items from memory should be enough if there's plenty of memory. But for those who need more optimization to support old, sluggish devices, it's good practice to apply self-destructive memory allocation.

Basically, self-destructing memory sets an expiration on each allocation and releases it when it expires. This may impede performance slightly in some conditions because it runs a timer for each allocation, but it also frees memory sooner and makes allocation more flexible.
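One way to sketch this, assuming a timer per entry as described above (the `ExpiringCache` name and TTL parameter are invented for illustration):

```javascript
class ExpiringCache {
  constructor(ttl) {
    this.ttl = ttl;       // lifetime of each entry in milliseconds
    this.map = new Map(); // key -> { value, timer }
  }
  set(key, value) {
    const existing = this.map.get(key);
    if (existing) clearTimeout(existing.timer);
    // Each entry schedules its own removal; touching it again resets the timer.
    const timer = setTimeout(() => this.map.delete(key), this.ttl);
    this.map.set(key, { value, timer });
  }
  get(key) {
    const entry = this.map.get(key);
    return entry ? entry.value : undefined;
  }
  clearAll() {
    // Cancel all pending timers and drop every entry.
    for (const { timer } of this.map.values()) clearTimeout(timer);
    this.map.clear();
  }
}
```

Compared with the LRU approach, this trades a small per-entry timer cost for memory that is reclaimed on its own schedule rather than only under cache pressure.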

Results - Write improves 46x; Read improves 37x

By implementing the optimizations above, the performance of AsyncStorage improves significantly. Figure 5 shows the results of a read/write test after optimizing AsyncStorage with the suggestions in this article.

Figure 5. Optimized AsyncStorage read/write test (10 times)

We ran the test with a block size of 100 and a batch write interval of 300ms, on the same device as before. The bottom line:

  1. Write operations get about a 46x boost
  2. Read operations get about a 37x boost

Wow. We’ve also tested other storage engines such as SQLite and Realm, and the optimized AsyncStorage still performs better.

Resolving other potential issues

A potential issue can come up if the process is interrupted during an I/O transaction. Imagine that the user closes the app when AsyncStorage has written the block metadata but not the blocks. Adding markers for the start and end of a transaction resolves the problem: before writing, store the request queue as a transaction, then clear it on completion. If a transaction queue remains after an interruption, load the queue so that the next I/O transaction can replay it.
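The journal-and-replay idea can be sketched as follows, again with an in-memory `Map` as a hypothetical synchronous stand-in for AsyncStorage (the `TX_KEY` name and function names are invented). Because `commit` is idempotent, replaying an interrupted transaction on startup is safe.

```javascript
// Hypothetical synchronous stand-in for AsyncStorage.
const storage = new Map();
const TX_KEY = 'pending-transaction';

function commit(writes) {
  storage.set(TX_KEY, writes);   // 1. journal the transaction first
  for (const [key, value] of writes) {
    storage.set(key, value);     // 2. apply each write
  }
  storage.delete(TX_KEY);        // 3. mark the transaction complete
}

function recover() {
  // On startup: if a journal survived an interruption, replay it.
  const pending = storage.get(TX_KEY);
  if (pending) commit(pending);
}
```

If the app dies between steps 1 and 3, the journal is still present on the next launch, so `recover` re-applies the whole write set and the blocks and metadata end up consistent.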

Conclusion

Our research shows that developers can optimize AsyncStorage to be fast enough for production by reducing and speeding up its I/O operations. In fact, it can be improved to beat even localStorage.
