Comments (3)
@alexcasalboni - thanks for the reply and thoughts.
Yes, I agree that in most cases it probably doesn't matter once things average out. The reason I investigated is that we've been trying to figure out the pros/cons of using ZIP vs. container packaging, and we're also looking at running Rust Lambdas, both of which are charged for the init phase.
For functions that run "warm", maybe 1 in 10,000 invocations is cold, but I can imagine that less frequently invoked, event-driven Lambdas might have a higher rate of cold starts, hence it's important.
But yes, I guess using Billed Duration is best, as that, after all, is the time you're being billed for.
I'll have a go at a PR - JS isn't my strong suit, but you've got to learn sometime. :)
from aws-lambda-power-tuning.
Hi @NeilJed, thanks for reporting this!
I believe it's an interesting observation and it can impact cost considerations for functions that have a very high, non-free initialization time, depending on how you use Lambda Power Tuning.
For historical reasons (before 1ms billing was introduced in late 2020), the tool used Duration because it represented invocation time much better than Billed Duration - which was rounded up to the nearest 100ms.
Today this isn't true anymore, but my intuition says the power-tuning impact should be fairly low. Because the "durations" are averaged and outliers are excluded by default (see here and here), using Duration should not have impacted power-tuning results considerably so far - unless you use a very low `num`, `parallelInvocation=true`, `discardTopBottom=0`, and the init time is considerably long and not free.
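For illustration, the outlier exclusion works roughly like a trimmed average (a minimal sketch, not the tool's actual implementation; the `discardTopBottom` parameter here mimics the tool's parameter of the same name, a fraction trimmed from each end):

```javascript
// Sketch of a trimmed average that excludes cold-start outliers.
// Not the actual Lambda Power Tuning code - just the idea.
function trimmedAverage(durations, discardTopBottom = 0.2) {
  const sorted = [...durations].sort((a, b) => a - b);
  const discard = Math.floor(sorted.length * discardTopBottom);
  const kept = sorted.slice(discard, sorted.length - discard);
  return kept.reduce((sum, d) => sum + d, 0) / kept.length;
}

// One cold start (~800ms init + 50ms handler) among warm ~50ms invocations:
const samples = [850, 50, 51, 49, 50, 52, 50, 48, 51, 50];
trimmedAverage(samples, 0.2); // the 850ms outlier is discarded (~50.3ms)
trimmedAverage(samples, 0);   // with discardTopBottom=0 it inflates the average (130.1ms)
```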
Let me explain 😄 (very long thinking-out-loud moment - skip to bottom for final results)
As pointed out in the article you shared:
The init phase gets two unthrottled vCPUs, even at very low memory configurations
So for the sake of power-tuning itself, the init duration does not depend on the memory/power configuration and someone might argue that Lambda Power Tuning shouldn't include init time in the analysis because 1/ it's power-independent, 2/ it's often free, and 3/ the tool excludes cold starts from the analysis by default anyway.
That said, when `discardTopBottom=0`, cold starts are included in the analysis, and using Billed Duration (instead of Duration) would indeed affect the analysis, causing the average duration to be higher. Please note this would happen for all power values, so I'd still expect the final power-tuning results not to be affected (e.g. if 512MB was the optimal value, it probably still is).
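As a quick numeric illustration of why a power-independent init offset shouldn't change the winner (hypothetical numbers, not measured data):

```javascript
// Hypothetical average handler durations (ms) per power value, plus a
// power-independent init time that gets billed when cold starts are included.
const initMs = 400; // init duration is roughly constant across power values
const avgHandlerMs = { 128: 900, 512: 240, 1024: 130 };

// Adding the same init offset to every power value raises all the
// absolute averages, but doesn't change which configuration is fastest.
const withInit = Object.fromEntries(
  Object.entries(avgHandlerMs).map(([power, ms]) => [power, ms + initMs])
);
// 1024MB is still the fastest option either way.
```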
Another important parameter to keep in mind is `parallelInvocation`. When `false`, you only get one cold start per power value, so it's easy to discard with `discardTopBottom>0`. When `true`, you can experience multiple cold starts per power value. Since new Function Versions/Aliases are created each time you power-tune, the number of cold starts should be consistent across many power-tuning executions.
Basically, there are 8 major cases to consider.
If `parallelInvocation=false`:
- `discardTopBottom>0` - all cold starts are ignored - no big deal
- `discardTopBottom=0` and your init time is very short - no big deal
- `discardTopBottom=0` and your init time is considerable but free - no big deal
- `discardTopBottom=0` and your init time is considerable and not free - this is probably worth fixing because Duration << Billed Duration, but it should have a minor impact on the reported average duration because you only get one cold start per power value
If `parallelInvocation=true`:
- `discardTopBottom>0` - most cold starts are ignored - no big deal
- `discardTopBottom=0` and your init time is very short - no big deal
- `discardTopBottom=0` and your init time is considerable but free - no big deal
- `discardTopBottom=0` and your init time is considerable and not free - this is definitely worth fixing because Duration << Billed Duration, and it might have a major impact on the reported average duration
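To make that last case concrete, a back-of-the-envelope sketch (hypothetical numbers) of how much a billed init time can inflate a Billed-Duration-based average when several cold starts survive into the analysis:

```javascript
// Simplified model with hypothetical numbers: parallel invocation with
// discardTopBottom=0, so cold starts are not trimmed from the average.
const handlerMs = 50;
const initMs = 800; // considerable and billed
const invocations = 20;
const coldStarts = 5; // parallel invocation can produce several cold starts

// Duration only measures the handler, so cold starts barely move it.
const avgDuration = handlerMs; // ~50ms either way

// Billed Duration also covers init for the cold invocations.
const avgBilled =
  (coldStarts * (handlerMs + initMs) + (invocations - coldStarts) * handlerMs) /
  invocations;
// (5*850 + 15*50) / 20 = 250ms - five times the Duration-based average
```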
Apologies for the long reply, I needed to think out loud and confirm that the current implementation works fine for the vast majority of use cases.
To summarize, it totally makes sense to use Billed Duration in all situations 🚀
Will work on a PR to implement this (with a regex) asap 👌 Or do you feel like submitting a PR yourself?
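For reference, a hedged sketch of what extracting Billed Duration from a standard Lambda REPORT log line with a regex could look like (the request ID below is illustrative, and this is not necessarily the regex the PR will use):

```javascript
// Extract Billed Duration (ms) from a Lambda REPORT log line.
// Sketch only - field order and a few decimals may vary in practice.
function extractBilledDuration(logLine) {
  const match = logLine.match(/Billed Duration:\s*([0-9.]+)\s*ms/);
  return match ? parseFloat(match[1]) : null;
}

const report =
  'REPORT RequestId: 00000000-0000-0000-0000-000000000000\t' +
  'Duration: 12.34 ms\tBilled Duration: 13 ms\t' +
  'Memory Size: 128 MB\tMax Memory Used: 18 MB\tInit Duration: 185.42 ms';
extractBilledDuration(report); // 13
```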
Closing this :)