Guidebook Benchmarks
Standards
5 Is the Number
Present Results in Seconds
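The sketch below is a minimal illustration of these two rules, assuming that "5" refers to the number of repetitions per benchmark: repeat the measurement five times and report each run in seconds. The function being timed is a placeholder, not one of the guidebook's benchmarks.

<code python>
import timeit

def operation_under_test():
    """Placeholder for whatever the benchmark actually measures."""
    return sum(range(100_000))

# Assumption: "5 Is the Number" is read here as five repetitions per benchmark.
REPEATS = 5

# timeit.repeat returns the elapsed time of each run in seconds,
# so results can be reported in seconds without any conversion.
times = timeit.repeat(operation_under_test, number=1, repeat=REPEATS)

for i, elapsed in enumerate(times, start=1):
    print(f"run {i}: {elapsed:.6f} s")
</code>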
Make a Change -> Rerun All Tests
-  If you make any changes to a test file, please rerun all associated tests.
-  To make rerunning possible, put up a new files:page for any new benchmarks created.
   -  Ensure that all associated files are included on this files:page, and are not links to other areas.
   -  This duplication violates terseness, but it is important to guarantee that each test relies on only one page.
   -  Otherwise (when many benchmarks refer to a single file location) it is impossible to know what to update in order to keep all results consistent.
-  If you do not believe that your change affected the results, prove it rather than assuming that it did not.
-  In order for this guide to be usable, benchmarks must be implicitly worthy of trust.

Make All Efforts to Only Test Your Target
-  Keep all test cases as simple as possible.
-  Consider your tests carefully, and make efforts to only test the desired language feature.
   -  In all cases we stored the results of a read, but we did not maintain them between iterations.
   -  E.g., we did not want to consider the cost of list growth outside of stdin.readlines(), where storing an entire file is unavoidable.
   -  Instead of the time builtin we opted for the standard Python3 timeit library to isolate only the writes (see the sketch following this list).
-  Some level of judgment on the part of the author is necessary to ensure that these efforts are made correctly.

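As a concrete illustration of the points above, the sketch below uses timeit.repeat to time only the target call: the read results are stored in a local variable, so they exist during the measured operation but are discarded between iterations. The in-memory input, function name, and repetition count are assumptions made for this example, not the guidebook's actual benchmark code.

<code python>
import io
import timeit

# Illustrative input: a fixed in-memory "file" so the example is self-contained.
# (The benchmarks described above read from stdin; this stand-in is an assumption.)
DATA = "".join(f"line {i}\n" for i in range(10_000))

def read_all_lines():
    """Store the results of the read locally, without keeping them between iterations."""
    stream = io.StringIO(DATA)
    lines = stream.readlines()  # results are stored during the timed call...
    return len(lines)           # ...but go out of scope before the next iteration

# timeit isolates the target call itself; five repetitions, reported in seconds,
# following the standards above.
times = timeit.repeat(read_all_lines, number=1, repeat=5)
print("per-run times (s):", [round(t, 6) for t in times])
</code>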
                    
                                     