--- a/README.md Fri Nov 08 21:58:13 2013 +0000
+++ b/README.md Mon Nov 11 22:57:04 2013 +0000
@@ -6,7 +6,17 @@
of benchmarks and performance regressions. CalipeL has been heavily
inspired by [SUnit][1] and [Caliper][2].
-Features:
+The basic ideas behind it are:
+
+- Benchmarking and (especially) interpreting benchmark results is always
+ a tricky business. Therefore, the framework should be as simple as
+ possible, so that everybody understands the meaning of the numbers it gives.
+- Benchmark results should be kept and managed in a single place, so one
+ can view and retrieve all past benchmark results in much the same way
+ as one can view and retrieve past versions of the software from a
+ VCS.
+
+## Features
- *simple* - creating a benchmark is as simple as writing a method in a class
- *flexible* - special set-up and/or warm-up routines can be specified at benchmark level, as well as a set of parameters to allow fine-grained measurements