Emergy is a simple API and command-line utility for calculating emergy via the modified track summing algorithm developed at Henri Tudor.
Set up the Google Test framework in the project directory:
svn checkout http://googletest.googlecode.com/svn/trunk/ gtest-svn
Build and run tests:
make
make tests
Run the calculator on test files in which the inputs have been broken up into separate entries for each source in order to test aggregation:
make odum-test
./emergy_calculator -g test-files/odum96-figure6.8.graph.dat -i test-files/odum96-figure6.8.inputs.dat
reading graph from test-files/odum96-figure6.8.graph.dat...
graph: test-files/odum96-figure6.8.graph.dat
read 9 lines from test-files/odum96-figure6.8.graph.dat
reading input parameters from test-files/odum96-figure6.8.inputs.dat...
processed 6 node=flow pairs with total input = 30000
minFlow = 0
found 3 unique inputs
STATISTICS:
longest path: 4
complete paths: 6
loop violations: 9
flow lost to loop violations: 37500
minflow violations: 0
flow lost to minflow violations: 0
OUTPUTS:
output: Y = 7500
output: Z = 30000
Ran odum-test ...
The first command line calculator is a very simple but usable example of a tool that uses the emergy library. It is entirely superseded by emergy_calculator.
Here's the usage specification for calc_emergy:
USAGE: ./calc_emergy <graph file> <flow multiplier=0.0> <node=flow>
It takes a process graph file in the format NODEA NODEB FLOW, where FLOW is a multiplier on the emergy flowing out from NODEA to NODEB, greater than 0.0 and less than or equal to 1.0.
The flow multiplier argument defaults to 0.0, which means flows are treated exactly; calculations can be very slow with this setting. Start with 0.01 to get a quick first-pass calculation. Getopt is not used, so a value must be specified.
Input flows in the form node=flow specify how much total flow is input to a node (e.g. N3=10.0). Nodes are assumed unique, so multiple flows to a single input should be summed prior to running the command. SEE BELOW: emergy_calculator accepts raw flows and sums them prior to running the calculation.
Run the command line calculator from the project directory:
./calc_emergy test-files/odum96-figure6.8.graph.dat 0.0 A=3000 B=7000 C=20000
reading graph from test-files/odum96-figure6.8.graph.dat...
graph: test-files/odum96-figure6.8.graph.dat
read 9 lines from test-files/odum96-figure6.8.graph.dat
read 3 inputs
longest path: 4
complete paths: 6
loop violations: 9
flow lost to loop violations: 37500
minflow violations: 0
flow lost to minflow violations: 0
output: Y = 7500
output: Z = 30000
PATHS
750 A D E Y
3000 A D Z
1750 B D E Y
7000 B D Z
5000 C A D E Y
20000 C A D Z
The next example uses the slightly more sophisticated calculator, which has been used for research publications (see, for example, [1]). It uses a file to store inputs (either inline or one input per line) in the same node=value format but sums nonunique entries into total inputs.
Here's the usage specification for emergy_calculator:
USAGE: ./emergy_calculator <graph file> <input file> [flow multiplier=0.0] [--print-source]
Using an input file with inline format (multiple flows for each input from the source article to test aggregation):
cat test-files/odum96-figure6.8.inputs.dat
A=1000 A=2000 B=3000 B=4000 C=10000 C=10000
We can run the calculator on the same data as the previous example (notice that redundant flows get aggregated):
./emergy_calculator -g test-files/odum96-figure6.8.graph.dat -i test-files/odum96-figure6.8.inputs.dat
reading graph from test-files/odum96-figure6.8.graph.dat...
graph: test-files/odum96-figure6.8.graph.dat
read 9 lines from test-files/odum96-figure6.8.graph.dat
reading input parameters from test-files/odum96-figure6.8.inputs.dat...
processed 6 node=flow pairs with total input = 30000
minFlow = 0
found 0 unique inputs
STATISTICS:
longest path: 4
complete paths: 6
loop violations: 9
flow lost to loop violations: 37500
minflow violations: 0
flow lost to minflow violations: 0
OUTPUTS:
output: Y = 750
output: Z = 3000
To break up inputs by source (e.g. test-files/odum96-figure6.8.sourced.inputs.dat), add the -p flag to the command:
cat test-files/odum96-figure6.8.sourced.inputs.dat
S1 A=1000 B=3000 C=10000
S2 A=2000 B=4000 C=10000
./emergy_calculator -g test-files/odum96-figure6.8.graph.dat -i test-files/odum96-figure6.8.sourced.inputs.dat -p
reading graph from test-files/odum96-figure6.8.graph.dat...
graph: test-files/odum96-figure6.8.graph.dat
read 9 lines from test-files/odum96-figure6.8.graph.dat
Source: S1 had 3 inputs
Source: S2 had 3 inputs
minFlow = 0
found 0 unique inputs
STATISTICS:
longest path: 4
complete paths: 12
loop violations: 18
flow lost to loop violations: 37500
minflow violations: 0
flow lost to minflow violations: 0
OUTPUTS:
output: Y = 750
output: Z = 3000
OUTPUT BY SOURCE:
S1 Y=250.0000 Z=1000.0000
S2 Y=500.0000 Z=2000.0000
Emergy is discussed in detail in Wikipedia's entry on Emergy. The basic track summing algorithm and an example are found in Odum, 1996.
A paper by Marvuglia, Benetto, Rugani, and Rios [1] describing the method and algorithm will be presented at Enviroinfo 2011.
[1]: "A scalable implementation of the track summing algorithm for Emergy calculation with Life Cycle Inventory databases"
Emergy was written by Gordon Rios (gparker at gmail) and is released under the simplified 2-clause BSD license.