software analysis: a roadmap
Daniel Jackson & Martin Rinard
MIT Laboratory for Computer Science
ICSE · Limerick, Ireland · June 7, 2000
every analysis involves
- an artifact language
- a property language
  operational or declarative?
- a mechanism
  sound, scalable, accurate?
artifact      property     mechanism
Promela       LTL          SPIN
assigns       CTL          SMV
Statechart    ad hoc       Statemate
CSP           CSP          FDR
FSP           LTL          LTSA

VDM           ad hoc       IFAD
Z             Z+           Z/EVES
Alloy         Alloy        Alloy

Java          Java types   javac
C++           FO logic     PSA
C             ad hoc       lint
any           graphs       RMT
C             SQL          CIA
PSA: Parametric Shape Analysis (Sagiv, Reps, Wilhelm)
RMT: Reflexion Model Tool (Murphy, Notkin, Sullivan)
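To make the artifact/property/mechanism split concrete: tools like SPIN in the table above explore the state space of an artifact and check a property at every reachable state. A minimal sketch of that kind of mechanism, over a hypothetical toy transition system (the counter, its successor function, and the invariant are all illustrative, not from any real tool):

```python
from collections import deque

def check_invariant(init, successors, invariant):
    """Explicit-state search: enumerate all states reachable from
    `init` via `successors` (the artifact, a transition system) and
    check `invariant` (the property) at each one.
    Returns a violating state, or None if the invariant holds."""
    seen = {init}
    frontier = deque([init])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Hypothetical artifact: a counter mod 4; property: never reach 3.
succ = lambda s: [(s + 1) % 4]
bad = check_invariant(0, succ, lambda s: s != 3)
```

Real model checkers add partial-order reduction, symbolic representations, and temporal-logic properties on top of this basic reachability loop.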
- why analysis?
- do errors matter?
analyzing code vs. models
a roadmap
- will leave you at forks without directions
- doesn’t stick to main roads
- doesn’t take you to your destination
- Z: language matters
- SMV: analysis matters
two kinds of result
- checking an adder
- modelling a railway
- in software, both matter
- design language with analysis
- no formal basis, no analysis
“This is not a good game,”
Said our fish as he lit.
“No, I do not like it,
Not one little bit!”
the apple doesn’t fall far from the tree
Oxford photo from Jonathan Bowen’s collection; Pittsburgh photo by Norm Schumm, with permission
- gentle slope: say a bit, analyze a bit
- few environmental assumptions
- maximal implementation freedom
- conjunction: refine by extension
- if property language is artifact language
  can use property as model
  can continue analysis without fixing bug
- need interactive tools
- language can help
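The “conjunction: refine by extension” idea above can be sketched as follows. Assume a declarative model is just a set of constraints over states (the universe of ten integers and the particular constraints are hypothetical): refining by conjoining a constraint can only shrink the set of admitted behaviours, so anything ruled out by the partial model stays ruled out.

```python
def behaviours(states, constraints):
    """A declarative model as a conjunction of constraints; its
    meaning is the set of states satisfying all of them."""
    return {s for s in states if all(c(s) for c in constraints)}

universe = set(range(10))                 # hypothetical state space
partial = [lambda s: s % 2 == 0]          # say a bit ...
refined = partial + [lambda s: s < 6]     # ... analyze a bit, say more

# Extension only conjoins constraints, so the refined model's
# behaviours are a subset of the partial model's.
assert behaviours(universe, refined) <= behaviours(universe, partial)
```

This is why a gently sloped language can support analysis of a partial model: later refinements never invalidate earlier checks of this monotone kind.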
model expressed as abstract program
- easy for programmers
- no frame problem

model expressed as logical formula
- can start with less
- inconsistency
do errors matter?
- catching errors early saves costly fix later
- few showstopper bugs
- subtle bugs are cheap to fix
- real cost is flawed design
- analysis can help simplify design
- “a difficulty deferred is a difficulty doubled”*
- find a new way to sell software model checking …
analysis in the face of
- missing source code
- libraries, frameworks
- multiple languages
- process unit once
- use spec for clients
- can’t escape specs
- user’s help may be inevitable
where do specs come from?
- the tool (eg, Rinard’s PEG)
handle all contexts
but need just a few
huge or imprecise spec
- the user (eg, ESC)
- a wizard (eg, Houdini)
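The “wizard” idea can be sketched in miniature. Houdini guesses a large set of candidate annotations and repeatedly discards any candidate the checker refutes, until the surviving set is self-consistent. The sketch below substitutes a toy refuter (candidates checked against hypothetical observed values) for Houdini’s actual use of the ESC static checker; the candidate names and observations are invented for illustration.

```python
def houdini(candidates, refuted_by):
    """Houdini-style annotation inference: start from a large candidate
    set and repeatedly drop any candidate refuted given the current
    set, until a fixpoint is reached."""
    current = set(candidates)
    changed = True
    while changed:
        changed = False
        for c in list(current):
            if refuted_by(c, current):
                current.discard(c)
                changed = True
    return current

# Hypothetical candidates about a variable x, refuted by observations
# rather than by a static checker as in the real tool.
observations = [3, 0, 7]
candidates = {
    "x >= 0": lambda x: x >= 0,
    "x > 0":  lambda x: x > 0,
    "x == 7": lambda x: x == 7,
}

def refuted(name, _current):
    return any(not candidates[name](x) for x in observations)

kept = houdini(candidates, refuted)
```

The appeal is that the user supplies nothing: the wizard over-guesses, and refutation prunes the guesses down to annotations the checker can live with.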
- sound: results can be trusted
why be unsound?
- often good enough (PREfix, Murphy’s LSME)
- easier to implement (Womble)
- soundness compromises accuracy (ESC)
- soundness & accuracy conflict
- engineer can do for a dime …
- depends on task at hand
- no guarantees, ever?
is it good enough?
- how detailed alias analysis?
- flow sensitive? context sensitive?
- higher-order functions?
- need both approaches
- can’t predict anyway
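One point on the “how detailed?” spectrum above can be made concrete: a minimal flow-insensitive, inclusion-based points-to analysis in the style of Andersen, over a hypothetical two-statement toy language. Flow insensitivity is visible directly: statement order is ignored, so results are sound for any execution order but less precise than a flow-sensitive answer.

```python
def andersen(stmts):
    """Flow-insensitive, inclusion-based ("Andersen-style") points-to
    sketch. Toy statement forms: ('addr', p, x) for p = &x and
    ('copy', p, q) for p = q. Iterates inclusion constraints to a
    fixpoint."""
    pts = {}
    changed = True
    while changed:
        changed = False
        for op, lhs, rhs in stmts:
            target = pts.setdefault(lhs, set())
            new = {rhs} if op == 'addr' else pts.get(rhs, set())
            if not new <= target:
                target |= new
                changed = True
    return pts

# Hypothetical program: p = &x; q = &y; p = q
prog = [('addr', 'p', 'x'), ('addr', 'q', 'y'), ('copy', 'p', 'q')]
pts = andersen(prog)  # p may point to x or y; order is ignored
```

Flow and context sensitivity, field sensitivity, and heap modelling each multiply cost; whether the extra precision pays depends on the task at hand, as the slide says.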
the short view
- only end-to-end results count
- show value for a given task
the long view
- the internet took 30 years
- surprises (eg, ML types for C)
- judge ideas, not applications
symbiosis: models & code
check conformance of code to models
- Bandera, RMT, Pathfinder, etc
how analysis helps models
- if not conforming, what predictive use?
how models help analysis
- focus on properties of interest
- no fumbling in the dark (eg, k-limiting)
- models as induction hypotheses
- look for good matches
object models & shape analysis?
complex implementation ok
- easy to implement, more implementors (eg, JVM)
- tool writers pick easy languages (eg, Java vs. C++)
can run for a week
- occasionally true
- but interactive tools far better
can evaluate analyses by reading papers
- not your examples, numbers suspect
- you have more/less expertise
- publication culture rewards hiding failures
must be linear time, etc
- better: how will Moore help?
- 1980–2000: 3 hours to 1 second
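A quick check of the 1980–2000 figure above: 3 hours down to 1 second is a speedup of 10,800×, which over 20 years amounts to a doubling roughly every 1.5 years, a Moore-like pace (the arithmetic below is illustrative, not from the talk).

```python
import math

# 3 hours of 1980 analysis time down to 1 second in 2000.
speedup = 3 * 60 * 60 / 1            # 10800x
years = 2000 - 1980
doublings = math.log2(speedup)       # about 13.4 doublings
doubling_period = years / doublings  # about 1.5 years per doubling
```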