AsyncStorage is a built-in key-value store in React Native and a good way to persist simple data. To make the SendBird SDK less dependent on other packages, we considered using AsyncStorage as our main store, but we were frustrated by its lagging performance.
Figure 1 displays the results of a performance test that reads and writes 2,000 items with AsyncStorage. We conducted the test on a Google Pixel 2 XL.
Figure 1. AsyncStorage read/write test (10 times)
For comparison, Figure 2 shows the results of the same test with localStorage in a mobile Chrome browser on a Google Pixel 2 XL.
Figure 2. LocalStorage read/write test (10 times)
AsyncStorage takes about 12x longer to read and write on average compared to localStorage. This article introduces a few techniques to optimize AsyncStorage so that it can be used in production.
Optimization usually involves a trade-off. This article assumes that time is more valuable than memory. Under this assumption, AsyncStorage can be improved in the following ways:
The first optimization trick is to group several items into a single AsyncStorage object, which we call a "block." Each block has a limit on the number of items it can store. See Figure 3.
Figure 3. The structure of blocks and items in the block manager. Each block holds many items and has its own key, and the block manager points to the last unfilled block.
The block manager is a global object that manages block operations: it can assign a new block, find a block, or update a block. It holds the current block object and a cursor (the index of the current block). If the current block is full, the block manager assigns a new block to store additional items; the new block becomes the current block and the cursor advances.
Because items are stored by block rather than by key, locating an item by its key is not straightforward. The block manager therefore maintains a key-to-blockKey map so that the block containing an item can be found from the item's key.
AsyncStorage holds the blocks and the block metadata: the cursor, the count (the total number of items), and the key-to-blockKey map. With this optimization, 1,000 AsyncStorage items shrink to 11 entries (10 blocks + 1 block metadata entry) if the block size is 100.
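The block manager described above can be sketched roughly as follows. The names, the block size, and the in-memory `storage` Map (standing in for AsyncStorage on disk) are our own illustration, not SendBird's actual implementation, and the sketch handles inserts only:

```javascript
const storage = new Map(); // stands in for AsyncStorage on disk (illustration)

const BLOCK_SIZE = 100; // assumed maximum number of items per block

const blockManager = {
  cursor: 0,          // index of the current (last unfilled) block
  count: 0,           // total number of items
  keyToBlockKey: {},  // item key -> block key map

  blockKey(index) {
    return `block_${index}`;
  },
  setItem(key, value) {
    let block = storage.get(this.blockKey(this.cursor)) || {};
    if (Object.keys(block).length >= BLOCK_SIZE) {
      this.cursor++; // current block is full: assign a new block and advance
      block = {};
    }
    block[key] = value;
    storage.set(this.blockKey(this.cursor), block);
    this.keyToBlockKey[key] = this.blockKey(this.cursor);
    this.count++;
  },
  getItem(key) {
    const blockKey = this.keyToBlockKey[key]; // find the block from the item key
    if (blockKey === undefined) return null;
    return storage.get(blockKey)[key];
  },
};
```

With a block size of 100, storing 250 items produces only 3 block entries in storage (plus the metadata), which is the grouping effect described above.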
Once we group items into blocks, however, a serious problem arises: the block manager writes block metadata so often that the flood of I/O operations can freeze the whole system.
Batching several write requests into one I/O operation solves this problem. Each write request is pushed into a batch write queue, and the queue is flushed after a short listening period. Figure 4 demonstrates how batch write works.
Figure 4. The batch write process
In Figure 4, items 1, 2, and 3 change a varying number of times during the batch write listening period: item 1 changes three times, item 2 twice, and item 3 once. After the listening period, the batch write process collects the most recent version of each item and writes them, compressing 6 write requests into 3 write operations. With batch write, the block metadata writes become much cheaper.
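The batch write process can be sketched as below. The names and the `writes` array (standing in for actual storage writes) are our own assumptions; in practice `flush()` would be driven by a timer set to the listening period:

```javascript
const writes = []; // stands in for the actual I/O write operations performed

const batchWriter = {
  queue: new Map(), // key -> latest value; later requests overwrite earlier ones
  push(key, value) {
    this.queue.set(key, value); // keep only the most recent version per key
  },
  flush() {
    for (const [key, value] of this.queue) {
      writes.push([key, value]); // one write per key, regardless of request count
    }
    this.queue.clear();
  },
};

// Six write requests arrive during the listening period...
batchWriter.push('item1', 'a');
batchWriter.push('item2', 'x');
batchWriter.push('item1', 'b');
batchWriter.push('item3', 'p');
batchWriter.push('item1', 'c');
batchWriter.push('item2', 'y');
batchWriter.flush();
// ...and compress into three actual write operations, as in Figure 4.
```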
The Promise pattern is another major cause of AsyncStorage's performance drawbacks. In a controlled experiment, we found that using Promises is costly compared to not using them: Promises lead to slower processing even when the work involves no I/O. After purging Promises from the implementation and using callbacks instead, we achieved a 10-12x performance boost overall.
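The difference is easy to see in a sketch. Every Promise-based call allocates a Promise and defers resolution to the microtask queue, while a callback can run synchronously when the value is already in memory. The functions and cache below are our own illustration, not the SDK's API:

```javascript
const cache = new Map([['key', 'value']]); // assumed in-memory store

// Promise-based: allocates a Promise and schedules a microtask on every call,
// even though the value is already available.
function getItemPromise(key) {
  return Promise.resolve(cache.get(key));
}

// Callback-based: invokes the callback synchronously with no Promise
// allocation and no microtask scheduling.
function getItemCallback(key, callback) {
  callback(cache.get(key));
}

let result = null;
getItemCallback('key', (value) => { result = value; });
// result is available immediately after the call returns
```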
Caching in memory is a common way to improve the performance of a data store because memory is much faster than disk. It makes sense, therefore, to cache the block metadata and the blocks in memory as well. But since memory is limited and the cache cannot hold everything, evicting obsolete entries is a must.
One simple way to accomplish this is to limit the number of items in memory, as in an LRU cache: once the number of items reaches the limit, the cache evicts the least recently used items.
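A minimal LRU cache can be built on a JavaScript `Map`, which preserves insertion order; re-inserting a key on access marks it as most recently used. This is a generic sketch of the technique, not SendBird's actual cache:

```javascript
class LRUCache {
  constructor(limit) {
    this.limit = limit;   // maximum number of items held in memory
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert to mark this key as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.limit) {
      // evict the least recently used entry (the first key in insertion order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```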
Evicting obsolete items from memory should be enough if there’s plenty of memory. But for those who need further optimization to support old, sluggish devices, it’s good practice to apply self-destructing memory allocation.
Basically, self-destructing memory sets an expiration for each memory allocation and releases it when it expires. This may slightly impede performance under certain conditions because it runs a timer for each allocation, but it frees memory sooner and so increases the flexibility of allocation.
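A sketch of the idea: each cached entry gets a timer that deletes it when the TTL elapses, and overwriting an entry resets its timer. The names and TTL value are illustrative assumptions:

```javascript
const ttlCache = new Map(); // key -> { value, timer }

function setWithTTL(key, value, ttlMs) {
  const existing = ttlCache.get(key);
  if (existing) clearTimeout(existing.timer); // overwriting resets the expiration
  // one timer per allocation: the entry destroys itself when the TTL elapses
  const timer = setTimeout(() => ttlCache.delete(key), ttlMs);
  ttlCache.set(key, { value, timer });
}

function getWithTTL(key) {
  const entry = ttlCache.get(key);
  return entry ? entry.value : undefined;
}

setWithTTL('session', 'abc123', 50); // entry releases itself after 50ms
```

The per-entry timer is the performance cost mentioned above; the gain is that stale entries leave memory without waiting for an explicit eviction pass.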
By implementing the optimizations above, the performance of AsyncStorage improves significantly. Figure 5 shows the results of a read/write test after optimizing AsyncStorage with the suggestions in this article.
Figure 5. Optimized AsyncStorage read/write test (10 times)
We ran the test with a block size of 100 and a batch write interval of 300ms, on the same device as in the earlier tests. The bottom line: the optimized AsyncStorage even beats localStorage. We’ve also tested other storage engines such as SQLite and Realm, and the optimized AsyncStorage still performs better.
A potential issue can come up if an interruption happens during an I/O transaction. Imagine that the user closes the app when AsyncStorage has written the block metadata but not the blocks. In such a case, marking the start and end of the transaction resolves the problem: before writing, store the request queue as a transaction, then clear it on completion. If the transaction queue remains due to an interruption, load the queue so that the next I/O transaction can replay it.
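The journaling step above can be sketched like this. The `disk` Map stands in for AsyncStorage, and all names are our own illustration:

```javascript
const disk = new Map(); // stands in for AsyncStorage (illustration)

function writeWithTransaction(requests) {
  // 1. Persist the pending requests as a transaction before touching any data.
  disk.set('transaction', JSON.stringify(requests));
  // 2. Perform the actual writes.
  for (const [key, value] of requests) disk.set(key, value);
  // 3. Clear the transaction marker on completion.
  disk.delete('transaction');
}

function recover() {
  // On startup, replay any transaction left over from an interruption.
  const pending = disk.get('transaction');
  if (pending) writeWithTransaction(JSON.parse(pending));
}

// Simulate an interruption: the transaction was journaled but never applied.
disk.set('transaction', JSON.stringify([['blockMeta', '{"cursor":3}']]));
recover(); // replays the journaled writes and clears the marker
```

If the process dies between steps 1 and 3, the journal survives, so the recovery path can complete the half-finished write instead of leaving metadata and blocks inconsistent.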
Our research shows that developers can optimize AsyncStorage to be fast enough for production by creating faster I/O operations. In fact, it can be improved to beat even localStorage.
SendBird's team is growing. Join us!