Comments (21)
Hi @bincykp
The timestamp:last was generated by stats/kmsg or stats/dmesg. There should be a "boot_failures" or other kernel error in the lkp ncompare result if the kernel failed to boot, and you can also check the other test suites' results to analyze kernel performance.
For example:
# ./lkp-tests/stats/kmsg kmsg
timestamp:last: 672776.31443
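For illustration, here is a minimal sketch of how a "last timestamp" can be pulled out of a kernel log. This is a Python approximation of the idea, not lkp-tests' actual stats/kmsg implementation:

```python
import re

def last_timestamp(kmsg_text):
    """Return the last '[ seconds.micros ]' timestamp seen in a kernel log.

    Simplified sketch of what a kmsg stats step might compute; the real
    lkp-tests script may parse the log differently.
    """
    last = None
    for line in kmsg_text.splitlines():
        m = re.match(r"\[\s*(\d+\.\d+)\]", line)
        if m:
            last = float(m.group(1))
    return last

log = "[    0.000000] Linux version 4.15.0\n[  672776.314430] systemd[1]: started\n"
print(last_timestamp(log))  # → 672776.31443
```

So timestamp:last is just the time of the last kernel log line, which is why it reflects uptime rather than benchmark performance.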
from lkp-tests.
Thanks for the reply. To better understand this, I am attaching the ncompare result of 4.15.0-25 vs 4.15.0-65. Can you please have a look and comment on the result?
4.15.0-29vs65.txt
Also, I would like to know how I can find out which git push caused the performance delay.
> The timestamp:last was generated by stats/kmsg or stats/dmesg.

So I understand that the dmesg timestamp:last value as such does not give any performance comparison value?
Hi,
Can somebody help me to answer the above queries?
Thanks
Bincy
> Can you please have a look and comment on the result (4.15.0-29vs65.txt)?

You can focus on the lines under "%stddev", e.g. vm-scalability.*, since it is a vm-scalability performance test.
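For example, the benchmark's own rows can be filtered out of a report programmatically. This is a hypothetical helper for illustration, not part of lkp-tests; the metric name is the last column of each row:

```python
def benchmark_lines(report):
    """Keep only rows whose metric name (last column) belongs to the benchmark."""
    return [line.strip() for line in report.splitlines()
            if line.split() and line.split()[-1].startswith("vm-scalability.")]

# Two-line excerpt from the attached ncompare report
report = """\
   3250:1        -281381%         436:1   dmesg.timestamp:last
      1.80           +15.7%       2.08    vm-scalability.free_time
    688860           +10.8%     763293    vm-scalability.throughput
"""
for row in benchmark_lines(report):
    print(row)
```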
> Also I would like to know how I can find out which git push caused the performance delay.

I'm not sure what you mean exactly, but lkp-tests itself shouldn't cause any performance delay.
> So I understand that the dmesg timestamp:last value as such does not give any performance comparison value?

Yes, you are right, it's not a performance value. The performance values are the ones under the "%stddev" line.
> I'm not sure what you mean exactly, but lkp-tests shouldn't cause performance delay.

Hi,
As per the attached PDF document, we can keep the Linux source in /c/repo/linux. lkp ncompare should then point to which git commit id caused the performance delay. I ran 'lkp ncompare' multiple times and did not observe any pointers to the code. Please suggest.
lkp-tests.pdf
> You can focus on the lines under "%stddev", e.g. vm-scalability.*, since it is a vm-scalability performance test.

Sorry to ask repeatedly, but this is not convincing me. The standard deviation for the vm-scalability values is empty here, so should I understand that vm-scalability performance is the same on both systems? At the same time, the number of fails differs between the two systems.
4.15.0-29-generic 4.15.0-65-generic
fail:runs %reproduction fail:runs
| | |
3250:1 -281381% 436:1 dmesg.timestamp:last
3250:1 -281381% 436:1 kmsg.timestamp:last
1:1 279% 4:1 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
10:1 347% 14:1 perf-profile.calltrace.cycles-pp.error_entry
0:1 27% 0:1 perf-profile.children.cycles-pp.error_exit
10:1 347% 14:1 perf-profile.children.cycles-pp.error_entry
0:1 27% 0:1 perf-profile.self.cycles-pp.error_exit
9:1 68% 9:1 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
1.80 +15.7% 2.08 vm-scalability.free_time
172236 +10.9% 190964 vm-scalability.median
0.00 -12.7% 0.00 vm-scalability.stddev
688860 +10.8% 763293 vm-scalability.throughput
304.41 -0.2% 303.73 vm-scalability.time.elapsed_time
304.41 -0.2% 303.73 vm-scalability.time.elapsed_time.max
3934 -40.3% 2350 vm-scalability.time.file_system_inputs
16.00 +0.0% 16.00 vm-scalability.time.file_system_outputs
48911 -14.0% 42064 vm-scalability.time.involuntary_context_switches
13.00 -61.5% 5.00 vm-scalability.time.major_page_faults
4104 -0.7% 4076 vm-scalability.time.maximum_resident_set_size
3.532e+08 +17.8% 4.159e+08 vm-scalability.time.minor_page_faults
4096 +0.0% 4096 vm-scalability.time.page_size
391.00 +1.0% 395.00 vm-scalability.time.percent_of_cpu_this_job_got
358.54 +31.5% 471.54 vm-scalability.time.system_time
833.97 -12.4% 730.47 vm-scalability.time.user_time
210.00 -8.1% 193.00 vm-scalability.time.voluntary_context_switches
2.07e+08 +10.8% 2.293e+08 vm-scalability.workload
> lkp ncompare should point to which git commit id caused the performance delay.

The "git commit id" means the kernel's commit: you can build and install the kernel from source code, then run lkp-tests to compare the performance between different kernel commits.
> The standard deviation for the vm-scalability values is empty here, so should I understand that vm-scalability performance is the same on both systems?

"stddev" is calculated from the results on the same system with the same kernel, so "stddev" is 0 when "runs" is 1. You can run lkp-tests more times on the same test environment.
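Numerically, that rule looks like this. A hedged Python sketch, not lkp-tests code; the repeat-run values are hypothetical:

```python
import statistics

def rel_stddev_pct(samples):
    """Population stddev as a percentage of the mean.

    %stddev compares repeat runs of the *same* kernel on the same machine,
    so with a single run there is no spread and the column shows 0 / empty.
    """
    return statistics.pstdev(samples) / statistics.mean(samples) * 100

print(rel_stddev_pct([688860]))                      # → 0.0  (runs == 1)
print(rel_stddev_pct([688860, 691200, 685400]) > 0)  # spread appears with repeats
```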
> You can install the kernel from source code, then run lkp-tests to compare the performance between different kernel commits.

I did that.
> You can run lkp-tests more times on the same test environment.

So, same question: how do I compare different kernel versions with this result?
Thanks
Is it directly with the %change value? Then why is this section separated?
fail:runs %reproduction fail:runs
| | |
3250:1 -281381% 436:1 dmesg.timestamp:last
3250:1 -281381% 436:1 kmsg.timestamp:last
1:1 279% 4:1 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
10:1 347% 14:1 perf-profile.calltrace.cycles-pp.error_entry
0:1 27% 0:1 perf-profile.children.cycles-pp.error_exit
10:1 347% 14:1 perf-profile.children.cycles-pp.error_entry
0:1 27% 0:1 perf-profile.self.cycles-pp.error_exit
9:1 68% 9:1 perf-profile.self.cycles-pp.error_entry
Do these values have any particular significance?
> how do I compare different kernel versions with this result?

We collect some common performance stats in index-perf-all.yaml. For example,
688860 +10.8% 763293 vm-scalability.throughput
means there is a 10.8% performance improvement in the vm-scalability test.

> Do these values have any particular significance?

The perf-profile values are there for debugging; the perf results can help find the root cause of a performance change.
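The %change arithmetic behind that row can be checked directly. A sketch of the column's math, not lkp-tests' actual code:

```python
def pct_change(base_avg, new_avg):
    """The %change column: relative difference of the two per-kernel averages."""
    return (new_avg - base_avg) / base_avg * 100

# vm-scalability.throughput, base kernel vs new kernel
print(round(pct_change(688860, 763293), 1))  # → 10.8
```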
Thanks. So
688860 +10.8% 763293 vm-scalability.throughput
means there is a 10.8% performance improvement for vm-scalability on the 4.15.0-65-generic kernel compared to 4.15.0-29-generic.
What does this section mean? Especially, what does %reproduction mean?
fail:runs %reproduction fail:runs
| | |
3250:1 -281381% 436:1 dmesg.timestamp:last
3250:1 -281381% 436:1 kmsg.timestamp:last
1:1 279% 4:1 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
10:1 347% 14:1 perf-profile.calltrace.cycles-pp.error_entry
0:1 27% 0:1 perf-profile.children.cycles-pp.error_exit
10:1 347% 14:1 perf-profile.children.cycles-pp.error_entry
0:1 27% 0:1 perf-profile.self.cycles-pp.error_exit
9:1 68% 9:1 perf-profile.self.cycles-pp.error_entry
"fail:runs %reproduction fail:runs" contains stats related to boot; you don't need to care about them if the boot succeeds.
Here is an example showing "%reproduction": it expresses the relationship between "fail" and "runs". However, the data you pasted shows that some "fail" values are not real failures but just the values of those items, so "%reproduction" is meaningless for them.
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 50% 2:4 dmesg.WARNING:at_ip__fsnotify_parent/0x
:4 25% 1:4 dmesg.WARNING:at_ip__x64_sys_io_submit/0x
:4 25% 1:4 dmesg.WARNING:at_ip_aio_read/0x
1:4 -25% :4 dmesg.WARNING:at_ip_io_submit_one/0x
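The numbers in these example rows are consistent with reading %reproduction as the change in failure rate between the two sides. This formula is inferred from the rows above; the actual lkp-tests computation may differ:

```python
def reproduction_pct(base_fail, base_runs, new_fail, new_runs):
    """%reproduction read as the change in failure rate between the two kernels."""
    return (new_fail / new_runs - base_fail / base_runs) * 100

# ':4  50%  2:4'  -- failure appears in 2 of 4 runs on the new side
print(reproduction_pct(0, 4, 2, 4))  # → 50.0
# '1:4  -25%  :4' -- failure disappears on the new side
print(reproduction_pct(1, 4, 0, 4))  # → -25.0
```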
> the data you pasted shows that some fail values are not the real fail but only the result of these items

Thanks. A little more explanation, please. What do "these items" mean here? And if it shows like this, what is the reliability of these results?
Also, what does -25% in %reproduction mean?
These dmesg items come from the dmesg.json file, and dmesg.json is generated from dmesg. The lkp ncompare tool just presents these results in a different way, so you can also read the raw data to understand them:
{
  "dmesg.boot_failures": [1],
  "dmesg.WARNING:at_ip__x64_sys_io_submit/0x": [1],
  "dmesg.WARNING:stack_recursion": [1],
  "dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x": [1],
  "dmesg.timestamp:last": [185.401124],
  "dmesg.timestamp:WARNING:at_ip__x64_sys_io_submit/0x": [180.654425],
  "dmesg.timestamp:WARNING:stack_recursion": [185.401115],
  "dmesg.timestamp:WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x": [185.401124]
}
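Since the file is plain JSON, the raw data is easy to inspect directly. A minimal sketch using a trimmed copy of the example above (in practice you would load the dmesg.json file from the result directory):

```python
import json

# Trimmed excerpt of the dmesg.json shown above; each stat maps to a
# list with one value per run.
raw = '{"dmesg.boot_failures": [1], "dmesg.timestamp:last": [185.401124]}'
stats = json.loads(raw)

print(stats["dmesg.boot_failures"][0])   # → 1
print(stats["dmesg.timestamp:last"][0])  # → 185.401124
```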
:4 25% 1:4 dmesg.WARNING:at_ip_aio_read/0x
1:4 -25% :4 dmesg.WARNING:at_ip_io_submit_one/0x
The -25% in %reproduction means the change in the failure ratio: here the warning appeared in 1 of 4 runs on the first kernel and 0 of 4 on the second, so the failure rate dropped by 25%.
Thanks