DrupalCon Amsterdam 2014: Automated Performance Tracking
Speakers: Fabianx
Front-End, Back-End, Scalability and Database - Let's make Drupal fast!
Motivation - Why does core need automated performance tracking?
Web Page Performance is important. Period.
A performant Drupal Core is an important base for other sites to build upon, and it helps ensure they are performant as well. But how does core development ensure that Core is performant at all times?
Drupal has a performance core gate, but the development of Drupal 8 has shown that this gate is hard to enforce. The only real measurements of how fast or slow Drupal was at a given time have been single Apache Benchmark (ab) runs and how long the test suite took at a certain point in time. Both are very unreliable measurements, as the test suite inherently has quite different performance characteristics from what matters on a production site.
In particular, the following questions cannot easily be answered today:
Is Drupal 8 slower than Drupal 7 in the back-end?
What about front-end performance? Is Drupal 8 faster there? (yes!)
How many database queries does Drupal 8 make per page request?
How many bytes are transferred on average?
...
Yet these are important things to know.
Automating the process
The Twig team and some other core contributors, on the other hand, have measured almost every single patch by hand with xhprof-kit and XHProf-Lib. This gave great insight into the back-end impact of certain issues and also revealed performance problems elsewhere in core. But while the measurement itself was already automated, setting up the scenarios / code paths to test a certain patch still required a lot of painful, hard work.
How can we optimize this process further? How can we make performance testing / tracking as simple as our current test suite? How can we even combine it with our test suite?
What needs to happen for that? What do we need to track? What can we track?
This session wants to explore this field and then lead into an open discussion with core maintainers on this important topic.
In addition, you may want to reuse some of these ideas to track the performance of your own projects over time as well.
The ultimate guide to automated performance tracking.
This session will give an overview of what automated performance testing / monitoring is, then go into the details of which types exist, what can be tracked, which problems arise, and what could be implemented tomorrow, and show how Drupal Core and all the sites you are building could ultimately profit from this.
What is automated performance testing / monitoring?
Types of performance testing:
Front-End: HAR and the Browser Timing API; Behat
Back-End: xhprof-kit and XHProf-Lib (see the sketch after this list)
Scalability: JMeter and Behat
DB: Number of queries; slow queries; queries without indexes
(Remote "services" invocation overhead -- less relevant for Core)
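For the back-end entry above, tools like xhprof-kit essentially wrap the raw xhprof PHP extension. Here is a minimal sketch of that mechanism, assuming the extension is loaded; the scenario helper and output path are placeholders, not part of xhprof-kit:

```php
<?php

// Start collecting wall time, CPU time and memory per function call.
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

// Hypothetical helper: the code path under test, e.g. rendering a node page.
run_scenario();

// Stop profiling and get the raw call-graph data back.
$data = xhprof_disable();

// Persist the run so a viewer such as XHProf-Lib can render it later.
file_put_contents('/tmp/run.example.xhprof', serialize($data));
```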
Quantitative vs. Qualitative monitoring:
Speed vs. number of function calls / DB calls / HTTP calls (see the query-counting sketch after this list)
Bytes transferred
Cache Hits vs. Cache Misses (Files)
Cache Hits vs. Cache Misses (Render Cache)
File System Accesses (readdir, stat)
Aggregation Quality
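As an illustration of the qualitative side (counting DB calls rather than timing them), here is a sketch using Drupal's built-in query logger, Database::startLog() / Database::getLog(), shown in D8 style; the scenario helper and the 10 ms threshold are placeholders:

```php
<?php

use Drupal\Core\Database\Database;

// Begin logging every query under an arbitrary key.
Database::startLog('perf');

// Hypothetical helper: the code path under test.
run_scenario();

// Retrieve the collected log entries.
$queries = Database::getLog('perf');
printf("Queries: %d\n", count($queries));

// Counts are stable across machines; the timings below are only indicative.
foreach ($queries as $query) {
  if ($query['time'] > 0.01) {
    printf("Slow (%.4fs): %s\n", $query['time'], $query['query']);
  }
}
```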
Problems of Speed Performance Monitoring:
Needs a dedicated server and a high number of runs (e.g. 100)
Difficult with VMs
Minimum vs. average and median (see the sketch after this list)
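To make the minimum-vs.-average point concrete: noise from the OS, VMs, or other processes only ever adds time, so the minimum of many runs is the most stable estimate of the true cost. A small sketch with made-up numbers:

```php
<?php

// Summarize a series of benchmark runs (seconds).
function summarize(array $runs) {
  sort($runs);
  $n = count($runs);
  $mid = (int) floor($n / 2);
  $median = ($n % 2) ? $runs[$mid] : ($runs[$mid - 1] + $runs[$mid]) / 2;
  return array(
    'min' => $runs[0],
    'mean' => array_sum($runs) / $n,
    'median' => $median,
  );
}

// One noisy outlier (14.8) drags the mean up to 11.1, while the minimum
// and median stay close to the true cost of ~10.1s.
print_r(summarize(array(10.2, 10.1, 14.8, 10.3, 10.1)));
```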
How to report data:
HAR: Available for download (see the parsing sketch after this list)
XHProf-Kit / Lib: Data uploaded and available online.
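Since HAR files are plain JSON, pulling the headline numbers (requests, bytes transferred, total time) out of a downloaded recording is straightforward. A minimal sketch; the file name is a placeholder:

```php
<?php

// Parse a HAR recording (JSON) exported by a browser or proxy.
$har = json_decode(file_get_contents('recording.har'), TRUE);

$bytes = 0;
$time = 0;
foreach ($har['log']['entries'] as $entry) {
  // Per the HAR spec, bodySize is -1 when unknown.
  $bytes += max(0, $entry['response']['bodySize']);
  // Total elapsed time of the request, in milliseconds.
  $time += $entry['time'];
}

printf("%d requests, %d bytes, %.0f ms total\n",
  count($har['log']['entries']), $bytes, $time);
```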
Approaches Drupal could take:
Front-End: a Behat test suite that is measured while it runs (see the sketch after this list)
Back-End: needs an API to set up a site and create users / entities for a given scenario
Could even be backported to D7 for retrospective performance testing
Needs to live in Core (and be tested); devel_generate broke more often than it worked (Moshe)
Scalability: JMeter or Behat
DB: Automated query log sweep after a scalability test run
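For the front-end approach above, here is a sketch of what "measured while it runs" could look like: a Behat context that reads the browser's Navigation Timing API after a visit. It assumes a real-browser Mink driver (e.g. Selenium2); the class, step name, and reporting are illustrative, not an existing Drupal or Behat API:

```php
<?php

use Behat\MinkExtension\Context\RawMinkContext;

class PerformanceContext extends RawMinkContext {

  /**
   * @Then I record the page load time
   */
  public function recordPageLoadTime() {
    // Navigation Timing: full page load duration in milliseconds.
    $ms = $this->getSession()->evaluateScript(
      'return window.performance.timing.loadEventEnd'
      . ' - window.performance.timing.navigationStart;'
    );
    // A real setup would append this to a results log for trending over time.
    printf("Page load: %d ms\n", $ms);
  }

}
```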
Learning Goals
What is automated performance testing?
What approaches can be used now?
What approaches could be used in the future?
How can Drupal profit from this?