This content originally appeared on DEV Community and was authored by Michal Moravik
Have you ever wanted to benchmark parts of your code but found that most libraries restrict you to measuring at the function level (and nothing more granular)?
No? Well, we did at my workplace. To solve the issue, we leveraged the power of Python contexts. Check out the following example to understand what I mean:
import time

def example_func():
    with t('metric1'):
        time.sleep(0.5)
    with t('metric2'):
        time.sleep(0.3)

# ... later in the code
m = t.add_total('total').metrics
print(m)
prints:

{
    'metric1': {'start': 1656844808.09, 'end': 1656844808.59, 'interval': 0.5},
    'metric2': {'start': 1656844808.59, 'end': 1656844808.89, 'interval': 0.3},
    'total': {'start': 1656844808.09, 'end': 1656844808.89, 'interval': 0.8}
}
Using Python contexts (the "with" statements), your benchmarking scope is not limited to whole functions (in this case, example_func); instead, you can wrap exactly the parts of your code you'd like to measure.
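To illustrate the idea, here is a minimal sketch of how such a context-manager timer could be built with the standard library's contextlib. This is my own reconstruction, not Trainer's actual source: the Timer class and its internals are assumptions, and only the usage at the bottom mirrors the article's example.

```python
import time
from contextlib import contextmanager


class Timer:
    """Hypothetical sketch of a context-manager-based benchmark timer."""

    def __init__(self):
        self.metrics = {}

    @contextmanager
    def __call__(self, name):
        # Entering the `with` block records the start time;
        # leaving it (even via an exception) records the end.
        start = time.time()
        try:
            yield
        finally:
            end = time.time()
            self.metrics[name] = {
                'start': start,
                'end': end,
                'interval': round(end - start, 2),
            }

    def add_total(self, name):
        # Summarize all recorded metrics: earliest start to latest end.
        starts = [m['start'] for m in self.metrics.values()]
        ends = [m['end'] for m in self.metrics.values()]
        self.metrics[name] = {
            'start': min(starts),
            'end': max(ends),
            'interval': round(max(ends) - min(starts), 2),
        }
        return self


# Usage mirroring the article's example:
t = Timer()
with t('metric1'):
    time.sleep(0.5)
with t('metric2'):
    time.sleep(0.3)
print(t.add_total('total').metrics)
```

Because the timer is just an object wrapping a context manager, it can be nested anywhere inside a function body, which is the flexibility the article highlights.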
It is a short but clever piece of code we call Trainer ⏱. It is now available to anyone on my GitHub and can be installed as a package using pip. Feel free to leave a star; you might need it in the future.
Michal Moravik | Sciencx (2022-07-04T21:53:37+00:00) Code Benchmarking in Python using Trainer ⏱. Retrieved from https://www.scien.cx/2022/07/04/code-benchmarking-in-python-using-trainer-%e2%8f%b1/