How useful is the free, open source Scilab/Xcos vs Matlab/Simulink?

One of my clients has requested a dynamic fuel cell power system model, so I investigated both the Matlab/Simulink and Scilab/Xcos modelling environments.  These packages can model complex electrical power and control systems using a graphical block diagram modelling tool.  Here is an example of a DC-DC boost converter modelled in Xcos:

Xcos DC-DC Boost Converter
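
For reference, the steady-state behaviour behind such a model is easy to check by hand.  Here is a minimal Python sketch (illustrative only – not Xcos code) of the ideal boost converter voltage gain, assuming continuous conduction mode:

    # Ideal DC-DC boost converter in continuous conduction mode (CCM).
    # Illustrative sketch only -- the Xcos model simulates the full switched dynamics.

    def boost_output_voltage(v_in, duty_cycle):
        """Ideal CCM boost converter: Vout = Vin / (1 - D)."""
        assert 0.0 <= duty_cycle < 1.0
        return v_in / (1.0 - duty_cycle)

    print(boost_output_voltage(12.0, 0.5))   # 24.0 V: 50% duty doubles the input
    print(boost_output_voltage(12.0, 0.75))  # 48.0 V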

To model the fuel cell power system in Matlab/Simulink requires the add-on toolboxes SimPowerSystems and Simscape.  This raises the license price to about $12,000 USD, plus further yearly license fees of roughly 20%.  An advantage of Scilab/Xcos is that the software is free.  Simulink/SimPowerSystems has a more extensive library of predefined component and subsystem models than Xcos, yet Xcos has the most important components defined.  Simulink/SimPowerSystems also has much better documentation, which is typical of commercial software vs. open source software.  Some Xcos documentation and tutorials are available, covering the most important topics.
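
To put the licensing difference in perspective, here is a rough cost sketch in Python (the figures are the approximate quotes above; actual pricing varies by configuration and region):

    # Rough multi-year cost comparison using the approximate figures quoted above.
    initial_license = 12_000        # USD: Matlab + Simulink + required toolboxes
    annual_maintenance_rate = 0.20  # ~20% of the license price per year

    for years in (1, 3, 5):
        matlab_cost = initial_license * (1 + annual_maintenance_rate * years)
        print(f"{years} yr: Matlab stack ~${matlab_cost:,.0f} USD vs. Scilab/Xcos $0")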

I had difficulties getting both packages up and running on my Windows 7 computer, as in both cases there were problems connecting external C compilers.  With Matlab, I was able to go back and forth with their helpful customer support to resolve the issue.  With Scilab, I had to do internet searches for forum posts by other users with similar problems.  In both cases I was able to get the packages running after some delay.  Matlab has better user support, as it is easier to call someone for help, but Scilab has a fair number of users posting problems and solutions, and with a bit of sleuthing, resolving my problem was not too difficult.

Working inside the environments is pretty similar.  Simulink/SimPowerSystems has the most capability, yet Xcos is impressively capable, with more and more tools being published by its user community.  Xcos offers perhaps 80-90% of the capability of Simulink/SimPowerSystems for my application, and that is good enough for what I need at this time.

One further advantage of Xcos is that it is much easier to share models, as anyone can get access to the modelling environment.  With Matlab/Simulink, your collaborators and clients also need Matlab/Simulink available, and that is an expensive proposition, especially when tied to yearly maintenance fees.

Xcos is steadily improving in capability, documentation, tutorials, and links to other programs.  It has come a long way in the last three years.  By being free, it is more accessible to a larger community, which will help accelerate its development and usefulness through the network effect.  For many practitioners, it is a great choice.

Matlab/Simulink will face some threat from Scilab/Xcos in the lower end of the market, yet I expect it to lead the high-end market as it continues to add capability, modules, applications, and linkages to other programs.  For larger institutions, it is a good choice.

Product Comparison for my application

Overall I am pleasantly surprised and impressed with Scilab/Xcos, and while it takes a little more time and effort to be productive with it than Matlab/Simulink, for my application, it is worth it.


Movie Review: The Challenger Disaster


9/10

This excellent 90-minute movie brings to life the great story of Richard Feynman’s investigation into the Space Shuttle Challenger disaster.  I found the movie had good pacing, rang very true to what actually happened, and featured very good acting by William Hurt as Feynman, along with Bruce Greenwood and Brian Dennehy.

The movie is based on Feynman’s book “What Do You Care What Other People Think?”, which is also terrific.  The story follows Feynman’s instrumental role in uncovering the truth about the root cause of the disaster – both technically and politically.  Feynman’s personal heroism against strong headwinds and personal illness makes for a compelling story.

The movie does great justice to the key scenes – from the dramatic O-ring experiment, to Feynman’s personal difficulties, to the political forces surrounding the investigation, both supporting and opposing it.


William Hurt’s performance drew me emotionally into the story.  I’ve not really been a big fan of Hurt’s performances in other movies – I didn’t like him as Duke Leto in Frank Herbert’s Dune (too stiff), and he was just OK in Dark City.  Yet in this movie he captured Feynman’s unique character very well.

The movie inspired me to re-read “What Do You Care What Other People Think?”, which I had first read over 20 years ago.  The story of Feynman and the Challenger remains sharply relevant today: complex systems are developed everywhere, many with significant safety consequences and large, often conflicting, multi-stakeholder interests – and sometimes those interests are inclined to bury the truth.

One of the most interesting short stories in “What Do You Care What Other People Think?” is the story of Richard and his first wife, Arline.  It is a great love story, despite its tragic nature.  The book’s title came from her.

This movie (and book) is highly recommended!

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled. – Richard Feynman

Is your Complex System Project on track for Ultraquality Implementation?



We expect complex systems like an airplane, a nuclear power plant, or an LNG plant to practically never fail.  Yet systems are becoming increasingly complex, and the more components there are in a system, the more reliable each component must be – to the point where, at the element level, defects become impractical to measure within the time and resources available.
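
The component-count argument can be made concrete.  For a series system of n components to achieve a target reliability R, each component must achieve roughly R^(1/n).  A quick Python sketch, with hypothetical numbers:

    # Required per-component reliability r for a series system of n components
    # to meet a target system reliability R: r = R ** (1/n).  Numbers hypothetical.

    target_system_reliability = 0.999  # e.g., 99.9% mission success

    for n in (10, 1_000, 100_000):
        r = target_system_reliability ** (1.0 / n)
        print(f"n = {n:>7,}: each component must achieve r = {r:.9f}")

    # At n = 100,000 the allowed per-component failure probability (~1e-8)
    # is far below anything measurable in a practical test program.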

Additionally, as energy and raw materials increase in cost, our expectations for complex systems’ durability, reliability, total cost of ownership, and return on investment will only increase.

Ultraquality is defined as a level of quality so demanding that it is impractical to measure defects, much less certify the system prior to use.  It is a limiting case of quality driven to an extreme, a state beyond acceptable quality limits (AQLs) and statistical quality control.

One example of ultraquality is commercial aircraft failure rates.  Complexity is increasing: the Boeing 767 has about 190 thousand software lines of code, the Boeing 777 about 4 million, and the Boeing 787 about 14 million.  The allowable failure rate of the flight control system continues to be one failure in 10 billion hours, which is not testable, yet the number of failures to date is consistent with this order of magnitude.
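
Why “not testable”: demonstrating one failure per 10 billion hours by test alone would take millennia, even with a large dedicated fleet.  A back-of-envelope Python sketch (the fleet size is a hypothetical assumption):

    # Back-of-envelope: test time needed to see even one expected failure
    # at the allowable rate.  The 1,000-aircraft test fleet is hypothetical.

    allowable_rate = 1e-10            # failures per flight hour
    mtbf_hours = 1 / allowable_rate   # 1e10 hours mean time between failures

    test_fleet = 1_000                # aircraft on continuous, 24/7 test
    hours_per_year = 24 * 365

    years = mtbf_hours / (test_fleet * hours_per_year)
    print(f"~{years:,.0f} years for one expected failure")  # ~1,142 years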


Another example of ultraquality is a modern microprocessor, which has the same per-chip defect rate even though the number and complexity of operations have increased by factors of thousands.  The corresponding failure rate per individual operation is now so low as to be almost unmeasurable.


What are the best practices to achieve ultraquality in complex systems?

Maier and Rechtin make a strong case that while analytical techniques like Six Sigma and Robust Engineering Design will get you close, the addition of heuristic methods will get you over the top.  This includes using a zero-defects approach not only in manufacturing, but also in design, engineering, assembly, test, operation, maintenance, adaptation, and retirement – the complete lifecycle.

There are many examples of how analytical techniques alone underestimate failure; for example, nuclear industry analyses of core damage frequency have proven to be off by an order of magnitude in reality.


A sample of applicable heuristics includes:

  • Everyone in the production line is a customer and a supplier [also extended to each person in the development team – engineering, supply, etc.]
  • The Five Whys
  • Some of the worst failures are system failures
  • Fault avoidance is preferable to fault tolerance in system designs
  • The number of defects remaining in a system after a given level of test or review (design review, unit test, system test, etc.) is proportional to the number found during that test or review.
  • Testing can indicate the absence of defects in a system only when: (1) the test intensity is known from other systems to find a high percentage of defects, and (2) few or no defects are discovered in the system under test.  (A toy illustration of these two heuristics follows the list.)
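
Here is a toy Python illustration of those last two heuristics.  The proportionality constant k is hypothetical; in practice it would be calibrated from prior projects and the known intensity of each review or test stage:

    # Toy illustration of the defect heuristics above; k = 0.5 is hypothetical.

    def defects_remaining(defects_found, k=0.5):
        """Heuristic: defects remaining after a review/test stage are
        proportional to the number found during that stage."""
        return k * defects_found

    found_per_stage = {"design review": 120, "unit test": 60, "system test": 25}
    for stage, found in found_per_stage.items():
        print(f"{stage}: found {found}, ~{defects_remaining(found):.0f} remaining")

    # Corollary: finding *few* defects certifies quality only if the stage is
    # known (from other systems) to catch a high percentage of defects.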

whatwedontknow

[pie chart courtesy Boeing.  FBW = Fly By Wire]

There is a lot more “how-to” material in the works of Maier and Rechtin, Juran, and Phadke.

Ultraquality requires ultraquality throughout all the development processes, and by extension throughout the delivering organization.  That is, you certify a lack of defects in the final product by insisting on a lack of defects anywhere in the development process.  Developing both the processes and the organization to achieve this state is possible, is being done in some organizations, and allows for superior business performance.

There are many examples of how organizations lack ultraquality in their processes or organization.  General Motors is under heavy criticism these days following the Valukas report, which exposed the company’s poor organizational and development practices.  Anecdotally, this is impacting GM dealers and turning their showrooms into ghost towns.

So back to the tagline: is your complex development project on track for ultraquality implementation?

Model Based Systems Engineering Readiness for Complex Product Development


The increasing complexity of today’s systems and systems-of-systems makes it increasingly difficult for systems engineers and program managers to ensure their product satisfies the customer.  As an example, this year alone, General Motors has recalled more vehicles in the US than it sold from 2009 to 2013 – and it is only May!


May 21, 2014, http://www.huffingtonpost.com/2014/05/21/gm-recall-more-than-sold_n_5367478.html

Over the past 5-10 years, the systems engineering community has developed a formal discipline of Model Based Systems Engineering (MBSE) to catch up with the rigorous modelling tools available in other domains, such as CAD/FEA for mechanical engineering, VMGSim/Hysys for chemical engineering, or C++ code generators for software development.

The combination of increased complexity, increased domain model usage, and the drive towards virtual product development and simulation capability has made it very difficult to keep all the models, documents, and data sets for a complex product consistent.  Without a single source of truth in the data set, there is an increased likelihood of downstream problems.  MBSE is now in a position to let systems engineers develop a rigorous, coherent, flexible system model that can serve as an integrating design and development function across the program lifecycle, enabling this future vision:


Source: INCOSE MBSE Workshop, Jan 2014

The main benefits of MBSE are:

  • Reduced rework, earlier visibility into risk and issues
  • Reduced cycle time, reduced development cost, cost avoidance
  • Better communication and more effective analysis
  • Potential for increased re-use (product line reusability: engineering done once, reused elsewhere)
  • Ability to generate and regenerate current reports and work products (a toy sketch follows this list)
  • Knowledge management (long-term and short-term)
  • Single source of truth
  • Competitiveness (our partners and competitors are doing it)
  • Think about how much of an engineer’s time is spent on data management rather than critical thinking (Change that ratio! Shift the nature of my hours)
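
As a toy illustration of the “single source of truth” and report-regeneration benefits, here is a deliberately simplified Python sketch – my own simplification, not SysML or any MBSE tool’s API:

    # Toy "single source of truth": one model, many regenerable views.
    # A deliberate simplification, not SysML or any MBSE tool's API.

    system_model = {
        "requirements": {"R1": "Provide 5 kW net power", "R2": "Start within 60 s"},
        "blocks": {"FuelCellStack": ["R1"], "Controller": ["R2"]},
    }

    def requirements_report(model):
        """Regenerate a traceability view directly from the model."""
        for block, reqs in model["blocks"].items():
            for r in reqs:
                print(f"{block} satisfies {r}: {model['requirements'][r]}")

    requirements_report(system_model)
    # Edit the model once; every generated report and view stays consistent.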

While models have always been a part of the document-centric systems engineering process, they are typically limited in scope or duration, and not integrated into a coherent model of the entire system.

MBSE uses a graphical modelling language called SysML, an extension of UML (Unified Modelling Language) developed by the software industry.  The SysML language and an MBSE modelling tool allow systems engineers to develop descriptive models of the system.  As an example:


Source: INCOSE MBSE Workshop, Jan 2014

There are several MBSE tools available: Rhapsody (IBM), MagicDraw (No Magic), and Enterprise Architect (Sparx).  These tools have been used successfully by companies like Ford, Boeing, and Lockheed Martin, and they continue to improve.  MBSE is still relatively early in its development compared to other domain tools, like CAD, FEA, or PLM (Product Lifecycle Management), but it is now at a stage where it can have an immediate impact on the developing system.  There are also many connectors to PLM tools, to requirements management tools like Rational DOORS, and to other disciplines’ tools.

I have found it really tough (and to a certain degree impractical) to keep all the various design documents and models up to date and consistent with each other using the document-centric systems engineering approach.  I’ve been using MBSE tools from both No Magic and Sparx, and they are both pretty good at capturing all the necessary systems engineering information in one model.  There aren’t many good tutorials and examples in the public domain, but there are enough to learn from.  I have been able to steadily and productively apply MBSE to my system design and analysis work.

I highly recommend that any organization doing complex product development consider MBSE.  It is the future of fast, high-quality product development.


Winning Strategy for Canada’s Hockey Gold

Canada’s Men’s Hockey team won gold in the 2014 Sochi Olympics yesterday with both a winning strategy and a committed execution of that strategy.


Until the final game and result, it was not fully clear what their strategy was, nor whether it would meet the goal of a gold medal.  The Canadian team was clearly loaded with scoring talent, but so were the American, Russian, Swedish, and Finnish teams.  The international ice rink is larger than NHL rinks, which changes the game to favour faster skaters and skill players, and it is the environment European hockey players develop in.  In their first four games of the tournament, the Canadians did not score as many goals as expected, with only a 2-1 overtime win vs. the Finns and a 2-1 win over Latvia.  The US team had been scoring on average 5 goals per game over their first four wins, including a 5-2 win over the Czech team in the quarterfinals.

The Semi Finals

The story going into the semi-final game between the US and Canada was that the Canadian team was slightly stronger overall on paper, with a little more depth, but that the US team was clearly playing much better.  Analysts were pretty split on who would win: the teams seemed evenly matched, the outcome was difficult to call, and if anything the momentum seemed to be on the US side.


The Canadians won the game 1-0, and while the score was close, most observers commented that the Canadian team was pretty dominant defensively, and the Americans really struggled to get any sustained pressure or second chances.

The Bronze Medal Game

The Americans went up against the Finns in the bronze medal game and again were favoured, because the Finnish team was not as talented or deep, with only about half their roster from the NHL.  The American team was embarrassed 5-0 and went home without a medal.  In this game, the Finnish team showed that when they play their European style of hockey they are very strong, and the US team showed a lack of heart, intensity, and poise, and fell apart.

The Gold Medal Game

Going into the gold medal game, again, there were many doubts about whether the Canadians would win.  The Finnish team had shown the night before that the European style of hockey could demolish a more talented US team.  The Swedish team was stronger than the Finnish team, having beaten them 2-1 in the semi-finals.  The Canadians still seemed to have trouble scoring goals, though they had also been allowing very few goals against.  The Canadian goaltender, Carey Price, had not been tested much in the tournament and had not had to make many difficult saves, whereas the Swedish goaltender, Henrik Lundqvist, seemed to have delivered stronger performances in his past 5 games.  Overall, the story going into the game was that either team had a good chance of winning.


The Canadians won 3-0 in a dominating, clinical performance and took the gold.

The Strategy

The medal was won with a complete commitment to a team defense model that emphasized offensive puck possession.  It was not a sit-back-and-turtle team defense.  Instead, it relied on the forwards coming back to help the defense when the other team had the puck, and then, when the Canadians had the puck, keeping it as much as possible through puck possession, a strong backcheck and forecheck, and help from the defensemen in the attacking zone.  The Canadians scored only 17 goals in 6 games in the 2014 tournament, whereas in the previous 2010 Olympics they scored 35 goals.  But in the 2014 Olympics they allowed only 3 goals over those 6 games, with shutouts in both the semi-final and gold medal games.  While they did not score a lot, they didn’t need to.  The other teams simply could not generate sufficient chances against the Canadian team.

Defense wins championships.

This strategy was rolled out to the team in August 2013 at the Calgary training camp, using ball hockey to demonstrate the system of team defense.  They had to use ball hockey because, for insurance reasons, they couldn’t use an ice surface.


During the Olympic tournament, observers could see that the players had totally bought into this system – from their between-game interviews, to their short shifts, to their selfless play.

“It was a feeling of absolute trust,” was how Jonathan Toews described the feeling of being one of Canada’s team members. “As soon as you jump over the boards you’re going out there to do the exact same thing the line before you did, and to keep that momentum going. Even when we got up two goals, we never stopped. We just kept coming at ’em, backchecking, forechecking. We didn’t give ’em any space. It was fun to watch and fun to be a part of.”

“That’s why we won,” said Steve Yzerman, the architect of a golden back-to-back. “Our best players said, ‘Guys, we’re going to win. We don’t care about individual statistics.’”

Mike Babcock, the Team Canada coach, said as much before he left a post-game press conference to partake in the closing ceremonies.

“Does anybody know who won the scoring race? Does anybody care?” he said.

The answers to those questions were, for the record: yes, Phil Kessel; and, um, probably not.

Babcock continued.

“Does anyone know who won the gold medal?”

Babcock wanted a point clarified, mind you, when the talk turned to defensive genius. It should be remembered that Canada, he essentially said, wasn’t partaking in Euro-brand defensive hockey. Canada wasn’t mimicking the bronze-winning Finns collapsing in a shell around Tuukka Rask, begging you to beat one of the world’s best goalies from beyond the human blockade.

“When we talk about great defence, sometimes we get confused,” Babcock said. “Great defence means you play defence fast and you have the puck all the time so you’re always on offence. We out-chanced these teams big-time. We didn’t score (as much as they would have liked). But we were a great offensive team. That’s how we coached it. That’s what we expected. That’s what we got. We didn’t ask guys to back up.”

“Canada was much, much better,” said Marts, the Swedish coach.

Concluding remarks

There were high expectations placed on the Canadian team, and nervous concern through the first four games of the tournament.  During the tournament, the only revelation of the strategy was what we saw on the ice during the games.  After the gold medal game was won, the team revealed the thinking behind their brand of team defense, and their strategy became clear.  It is never good to be too clear about your strategy during a tournament, as it can be countered by your opponents.

While execution of the strategy on the ice was an important part of the result, developing the team-first culture was an inherent part of the strategy and improved the likelihood of executing the system on the ice.  We all know that a coherent, high-performing team always outperforms a loose collection of individuals.  In this case, the strategy of how to develop that team, and the system of play during the games, was key to winning gold.

Insight and Heuristics in System Architecting

One insight is worth a thousand analyses


-Engineering and Art: Iron Man 3

Systems Architecting is as much art as it is science.  The best book on this subject is from Maier and Rechtin, and I highly recommend it.


-Maier and Rechtin, The Art of Systems Architecting, second edition, CRC Press, 2000

One of the best sections of the book deals with using heuristics in architecting.  Insight – the ability to structure a complex situation in a way that greatly increases one’s understanding of it – is strongly guided by lessons learned from one’s own or others’ experiences and observations.  Given enough lessons, their meaning can be codified into “heuristics”.  Heuristics are an essential complement to analytics.

As in the previous post, where the systems engineer is to consider the whole and apply wisdom, Maier and Rechtin also promote the use of wisdom, but they note that “wisdom does not come easy”:

  • Success comes from wisdom
  • Wisdom comes from experience
  • Experience comes from mistakes

While the required mistakes can come from the profession as a whole, or from predecessors, this also highlights the importance of systems engineering education from those skilled in the art.

Examples of heuristics are:

  1. Don’t assume that the original statement of the problem is necessarily the best, or even the right one
  2. In partitioning, choose the elements so that they are as independent as possible; that is, elements of low external complexity and high internal complexity
  3. Simplify. Simplify. Simplify.
  4. Build in and maintain options as long as possible in the design and implementation of complex systems.  You will need them.
  5. In introducing technological and social change, how you do it is often more important than what you do
  6. If the politics don’t fly, the hardware never will.
  7. Four questions, the Four Whos, need to be answered as a self-consistent set if a system is to succeed economically; namely, who benefits?, who pays?, who provides?, and, as appropriate, who loses?
  8. Relationships among the elements are what give systems their added value
  9. Sometimes it is necessary to expand the concept in order to simplify the problem.
  10. The greatest leverage in architecting is at the interfaces.

-taken from Maier and Rechtin, The Art of Systems Architecting, second edition, CRC Press, 2000

Heuristics are tools and must be used with judgement.  The ones presented in the book are trusted and time-tested.  They may not all apply specifically to your complex systems architecting work, though I think you will find most of them do.

Just Enough Systems Engineering

I’ve been putting together a teaching course on systems engineering, and I came across a gem of an eBook by Dwayne Phillips called “Just Enough Systems Engineering”.  I have found it a very useful reference in developing the course, as it is filled with systems engineering wisdom.


There is a large amount of systems engineering material available from various sources – from INCOSE, many books, many presentations, and many research papers.  It can be hard to summarize this large body of knowledge into a useful teaching course.  I find this eBook unique in that it comes at the subject from a very practical, how-to-use-it perspective.  And it is free!

http://dwaynephillips.net/systemsengineering/JustEnoughSystemsEngineering.pdf

There are many quotations in the book that I find very useful:

What does a systems engineer do?

“The systems engineer examines the entire system and applies a little wisdom.”

I think this is a good perspective, as it helps simplify a very complex topic and helps a practicing systems engineer remember the importance of using good judgement, which typically comes from experience – both the engineer’s own and the learnings from others’ experience.

When and how much systems engineering to apply

“Use systems engineering when the system and project are bigger than any two people.”

The importance of asking questions in the best way

“Here is where much of systems engineering collapses.”

Systems engineers have to work with a wide variety of people – the client, the builder, the development team, etc. – and asking questions of these people in the right way is a key skill.  Many engineers have very strong problem-solving skills.  A systems engineer also needs strong people skills, and this eBook has a wealth of material on how best to ask questions in the systems engineering context.  I haven’t found any other references that explain this angle well and give very practical suggestions on how to succeed.

For anyone interested in becoming a better engineer, I highly recommend this eBook.

Systems Approach to Health Care

Applying a systems approach to health care significantly improves quality, speed, economics, and customer satisfaction.  Having now experienced both the North American and Japanese health care systems, I can see the clear benefits of the systems engineering approach applied to technology, activities, and people (i.e., using the Design Structure Matrix approach).

Figure 1 Personal Hospital Pager, Japanese Hospital

When you are a patient at a Japanese hospital, you get a personal hospital pager so they can immediately notify you of your next diagnostic or consultation appointment and potentially slot you in earlier.  Japanese hospitals operate like a modern manufacturing plant or logistics center, with a fully integrated information technology system and all scheduling, results, and reports in the digital domain.  Japanese hospitals have all the diagnostic procedures in house – MRI, CT, PET, etc. – and the waiting times are so short there isn’t really a wait time.  The doctor is able to order all the necessary diagnostic or treatment procedures from her PC, and you basically go from one station to another in the hospital, all in one day.


Figure 2 Personal RFID Card and Diagnostic Schedule

As a patient, you also get a personal card with an RFID chip that stores key data (Figure 2, middle left) and a printout of your daily schedule – in this case, seven diagnostic or consultative events.  In Canada, it is often weeks between each such event, and sometimes much longer, such as for an MRI or CT scan.

From a patient perspective, the speed and very short delays are both comforting and must increase the likelihood of successful treatment for any degenerative disease.  From a macro perspective, a comparison of health care systems bears this out.

Figure 3 Health Care Systems Comparison

What is striking for Japan is the relatively low health care expenditure, the good results in life expectancy and infant mortality, the high amount of diagnostic equipment, the high number of hospital beds supported by a typical nurse count, the low out-of-pocket payments, and the short wait times.  An MRI here is about $100, vs. $1,500 in the US.  For major surgery, in Canada you stay in the hospital for 5-6 days; in Japan, you come in 2-3 days before and stay 21 days, until they are really, really sure you are OK (with lots of diagnostic tests).  Patients who have experienced both the Canadian and Japanese systems very much prefer Japan.

How can the Japanese system be so good and efficient?  While Japanese people may be more fit and have a better diet than North Americans, they are also one of the fastest-greying populations, and smoking is more prevalent in Japan than in North America.  Having experienced this system first hand, I believe the high level of integration, the full information technology system, the modern logistics/manufacturing processes, the competition between hospitals, and the overall design of the system to keep results high and costs low have forced process innovation in the right areas.  Is the Japanese system perfect?  Not by any means, but compared to North America, it is at another level.

There is great benefit to applying the systems approach to any system.  In the case of the Japanese health care system, that even includes ensuring the political side is appropriately managed: when Japanese physicians tried to game the system by ordering more MRIs, the Japanese government lowered the MRI fee by 35% the next year.

Figure 4 Automatic Reentry

While there are many administrators, health care professionals, and technicians at a Japanese hospital, there are also automated kiosks everywhere for many procedures – checking in, paying out-of-pocket expenses, urine tests, etc. – which makes the whole time spent at the hospital very smooth and efficient.  You put your personal RFID card from Figure 2 into these kiosks, complete your procedure, your card and file are updated, and you move on to the next station.

For health care, the system design is kind of easy.  We experience full IT tracking systems in our daily lives, like Amazon.com’s review, inventory, purchasing, and package-tracking systems.  We know that modern manufacturing and logistics process systems exist.  We have ways of measuring results, like outcomes, wait times, or customer satisfaction.  Japan has integrated these best practices into a low-cost system.  It is just good business, and a good human system.  North America needs to shamelessly borrow this better system from Japan, tailoring and improving where necessary, as was done with the Taguchi quality method and others.

Why Are Risk Assessments So Underestimated?

In light of the terrible train derailment tragedy in Lac-Mégantic this week, one question is: why are risk assessments so underestimated?

Figure 1: Train Derailment Consequences, Lac-Mégantic, July 2013

Engineers, scientists, and managers do risk assessments all the time as a normal course of business.  Yet system failures occur much more frequently than the risk assessments predict.

Typical nuclear power industry/regulator estimates of core damage frequency are between 1 in 20,000 and 1 in 50,000 reactor-years, which would mean a core damage incident every 40-100 years; given our operating history, there should have been less than one incident so far.  Yet so far we have had more than 10 such incidents.  The risk assessment and management methodology in this case is underestimating the risk by over an order of magnitude.
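
The arithmetic behind that gap, as a Python sketch (the accumulated fleet history is a round number for illustration):

    # Predicted vs. observed core damage incidents.  The accumulated
    # reactor-years figure is a round number for illustration.

    reactor_years_to_date = 15_000  # approximate world fleet operating history
    predicted_cdf = 1 / 20_000      # optimistic end of the 1/20,000-1/50,000 range

    expected = reactor_years_to_date * predicted_cdf
    observed = 10                   # "more than 10 such incidents"

    print(f"expected: {expected:.2f} incidents, observed: >{observed}")
    print(f"underestimate: >{observed / expected:.0f}x")  # over an order of magnitude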

While it is still too early to understand the root cause and systemic failures in the Lac-Mégantic train derailment, clearly the risks were underestimated.  The appropriate safeguards – design or human – failed.

There are many risk assessment techniques used by industry and regulators: Failure Modes and Effects Analysis, Probabilistic Risk Assessment, and Hazard and Operability Study, to name a few.  Where they tend to underestimate risk has been studied by many independent sources*, and they have been found to be especially weak in human factors:

  • complacency in design
  • failure to anticipate vulnerabilities from external sources to the system
  • unjustified trust in safety margins
  • poor training
  • cutting corners to cut costs
  • cosy relationship between regulators and the regulated
  • cultural factors
  • handovers between individuals or groups from different organizations

Hollywood likes to produce action/disaster movies that illustrate the consequences of accidents and incidents.  Sometimes they are overdramatic (the fuel cell explosion in Terminator 3 was like a huge nuclear bomb!  If only fuel cells could be so powerful…).


Figure 2: Fuel cell explosion (!) in Terminator 3

Other times Hollywood seems to be pretty prescient, as in the movie Unstoppable, though that had a happy ending.

Considering the catastrophic consequences of the Lac-Mégantic derailment, we need to reconsider oil transport – and not necessarily in favour of pipelines, as pipelines have their own unique risks and consequences.  The Lac-Mégantic derailment is bad for oil overall.

One advantage of many clean energy sources is that their inherent accident risks and consequences are much lower than those of conventional forms.  When assessed through that lens, the overall project and financial returns can be superior.


* Contact me for links


Open Source Thermodynamic Process Simulator Review – DWSIM

Many complex energy systems use thermodynamic process simulators for systems design, especially if fueled by oil, natural gas, methanol, hydrogen, or landfill gas.  The leading industry simulators are Aspen Hysys, ProSim, and VMGSim.  These simulators are very powerful: they can model very complex chemical and biological reactions, networks of unit operations with many interactions requiring multivariable convergence, and both steady-state and dynamic control systems.  As they provide significant value, especially to oil and gas, the license fees per seat are on the order of $10,000/year.

Open source software continues to gain traction with consumers; for example, OpenOffice has roughly 10-20% of the market, with Microsoft Office making up most of the rest.  Could there be a viable open source alternative for thermodynamic process simulators, given the smaller market and similar software complexity?

DWSIM is the only real open source thermodynamic simulator with capability very similar to the steady-state versions of Hysys, ProSim, or VMGSim, including a graphical flowsheet and an integrated spreadsheet.

Figure 1: DWSIM User Interface

I have recently used the latest versions of both a leading industry thermodynamic process simulator and DWSIM.  I am impressed with DWSIM, and I am currently productive with this tool.  It has the core functionality for steady-state mass and energy balances, and it is powerful enough to examine part-power and start-up states.  It is easy to use, easy to report from, and convergence times are good.  While it does not have dynamic capability, nor is that capability on the author’s roadmap, the vast majority of studies for most users are done in steady-state mode.  The overall support materials are good.  One advantage of being open source is that the underlying engine and calculations can be more transparent to the user, which can be good for learning and checking.
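
At its core, what any steady-state simulator converges is a coupled mass and energy balance around each unit operation.  Here is a minimal, illustrative Python sketch of a single heater block – real simulators like DWSIM use rigorous thermodynamic property packages, whereas the constant heat capacity here is a simplifying assumption:

    # Minimal steady-state energy balance for a heater block, the kind of unit
    # operation a process simulator converges.  The constant heat capacity is
    # a simplifying assumption; real tools use full property packages.

    def heater_duty_kw(mass_flow_kg_s, cp_kj_per_kg_k, t_in_c, t_out_c):
        """Q = m * cp * (Tout - Tin), returned in kW."""
        return mass_flow_kg_s * cp_kj_per_kg_k * (t_out_c - t_in_c)

    q = heater_duty_kw(2.0, 4.18, 25.0, 80.0)  # water-like stream, 25 -> 80 C
    print(f"Heater duty: {q:.1f} kW")          # 2.0 * 4.18 * 55 = 459.8 kW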

Every process simulator has its own learning curve, because each has somewhat different convergence routines and architecture – especially for converging very complex networks – so becoming truly productive requires some time investment.  I have simulated high-complexity systems in DWSIM so far, and overall it works fine.

The author, Daniel Medeiros, is currently active and planning further releases, which means the product will continue to improve.

For many users – students, some small businesses, part-time practitioners, or those who primarily need the core functionality – DWSIM is a great choice.  So far, the big industry players with proprietary software do not offer light versions of their software at lower cost, so there is demand for packages like DWSIM at this end of the market.  DWSIM has the open source “first-mover” advantage; it can set the standard in this segment and will take some of this market from the big players.

Overall, I am pleased with the product for what it is.  For most large companies, the industry-leading simulators are a better overall value proposition because of their greater power and features.  For the other end of the market, DWSIM is well worth trying, or may be the only economic option.