Improve Performance using Asynchronous Design (SteemIt)

I noticed for some time that many of my online SteemIt tools were slow to respond, especially when the result set contains many items. For example, more than 100 accounts delegate Steem Power to me, and this online tool took a while to list the delegators.

I dug into the code and finally found out that the converter from steem-python is slow.

steem = Steem(nodes = steem_nodes)
converter = Converter(steemd_instance = steem)
while some loop:  # pseudocode: iterate over the result set
    r.append({"sp": converter.vests_to_sp(vests)})
The converter.vests_to_sp() is called for every single row in the data set, which is time-consuming. Looking into converter.py:

def steem_per_mvests(self):
    info = self.steemd.get_dynamic_global_properties()
    return (Amount(info["total_vesting_fund_steem"]).amount /
            (Amount(info["total_vesting_shares"]).amount / 1e6))

def vests_to_sp(self, vests):
    return vests / 1e6 * self.steem_per_mvests()

We can see that steem_per_mvests is time-consuming because it needs to fetch data from the Steem blockchain via the steemd object.
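Even before adding a file cache, a quick first improvement (my illustration, not from the original tool) is to hoist the expensive call out of the loop, since the rate is effectively constant for the duration of one request. The sketch below uses a hypothetical slow_steem_per_mvests() with a time.sleep to stand in for the RPC call:

```python
import time

def slow_steem_per_mvests():
    # Stand-in for converter.steem_per_mvests(), which performs an RPC call
    time.sleep(0.01)
    return 489.85

rows = [1_000_000.0] * 20

# Slow: one "RPC" per row
start = time.time()
slow = [v / 1e6 * slow_steem_per_mvests() for v in rows]
slow_elapsed = time.time() - start

# Fast: call once, reuse the value inside the loop
start = time.time()
rate = slow_steem_per_mvests()
fast = [v / 1e6 * rate for v in rows]
fast_elapsed = time.time() - start

assert slow == fast  # same results, far fewer expensive calls
```

With 20 rows, the hoisted version performs a single slow call instead of 20, and the speedup grows linearly with the size of the result set.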

To improve performance, we can cache the value of self.steem_per_mvests() for, say, one hour. So we can write a cached version of vests_to_sp that converts VESTS to Steem Power:

import os

def file_get_contents(filename):
  with open(filename) as f:
    return f.read()

def vests_to_sp(vests):
  steem_per_mvests = 489.85031585637665
  fname = "cache/steem_per_mvests.txt"
  try:
    if os.path.isfile(fname):
      x = file_get_contents(fname).strip()
      if len(x) > 1:
        x = float(x)
        if x > 0:
          steem_per_mvests = x
  except (OSError, ValueError):
    pass
  return vests / 1e6 * steem_per_mvests

The next step is to write a script, e.g. update_steem_per_mvests.py, that fetches the value from the Steem blockchain and saves it locally to a text file, e.g. steem_per_mvests.txt:

from steem.converter import Converter
from steem import Steem
from nodes import steem_nodes

def file_put_contents(filename, data):
  with open(filename, 'w') as f:
    f.write(data)

steem = Steem(nodes = steem_nodes)
converter = Converter(steemd_instance = steem)

x = converter.steem_per_mvests()
file_put_contents('cache/steem_per_mvests.txt', str(x))
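One caveat (my addition, not covered in the original script): the web tool may read the cache file at the very moment cron rewrites it, and briefly see an empty or truncated value. Writing to a temporary file and renaming it makes the update atomic on POSIX systems. The helper below, file_put_contents_atomic, is a hypothetical drop-in replacement for file_put_contents:

```python
import os
import tempfile

def file_put_contents_atomic(filename, data):
    # Write to a temp file in the same directory, then atomically
    # replace the target, so readers never see a half-written file.
    directory = os.path.dirname(filename) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, filename)  # atomic rename on POSIX
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

The temp file must live in the same directory as the target, because os.replace is only guaranteed to be atomic within a single filesystem.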

You can then put this in crontab, e.g.

@hourly python3 update_steem_per_mvests.py > /dev/null 2>&1

Getting data from the blockchain is slow, and we should avoid it as much as we can. For data such as exchange rates, where we don't need 100% real-time accuracy, we can store it locally or in a cache and let another script update it asynchronously at an interval. Fetching real-time data is resource-intensive, and we can usually achieve a more responsive system with an asynchronous approach.
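The same idea can also be expressed in-process as a small time-to-live (TTL) cache, without a separate cron job. The sketch below is illustrative only; the cached decorator and the stand-in steem_per_mvests are my own names, not part of steem-python:

```python
import functools
import time

def cached(ttl_seconds):
    """Cache a zero-argument function's result for ttl_seconds."""
    def decorator(fn):
        state = {"value": None, "expires": 0.0}
        @functools.wraps(fn)
        def wrapper():
            now = time.time()
            if now >= state["expires"]:
                state["value"] = fn()
                state["expires"] = now + ttl_seconds
            return state["value"]
        return wrapper
    return decorator

calls = 0

@cached(ttl_seconds=3600)
def steem_per_mvests():
    # Stand-in for the slow blockchain RPC; counts how often it runs
    global calls
    calls += 1
    return 489.85031585637665

def vests_to_sp(vests):
    return vests / 1e6 * steem_per_mvests()

# Many conversions, but only one underlying "RPC" within the hour
results = [vests_to_sp(v) for v in [1e6, 2e6, 3e6]]
```

The trade-off is the same as with the file cache: results may be up to one TTL stale, which is fine for slowly-moving values like STEEM-per-MVESTS.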

Another example: the @justyy voting bot is accelerated by using a cached list of delegators, which is updated regularly by another script. This makes the bot more responsive and, of course, reduces the time spent per voting round.

You may also like the Chinese version: 用异步来提高性能 (Improving Performance with Asynchrony) (SteemIt)

–EOF (The Ultimate Computing & Technology Blog) —


The Permanent URL is: Improve Performance using Asynchronous Design (SteemIt)
