The NextGenAA group is:
Project summary:

NextGen systems are envisioned to be composed of human and automated agents interacting with dynamic flexibility in the allocation of authority and autonomy. The analysis of such concepts of operation requires methods for verifying and validating that the range of roles and responsibilities potentially assignable to the human and automated agents does not lead to unsafe situations. Such analyses must consider the conditions that could impact system safety, including the environment, human behavior, operational procedures, methods of collaboration, organizational structures, and policies and regulations.

Agent-based simulation has shown promise for modeling such complexity, but it requires a tradeoff between fidelity and the number of simulation runs that can be explored in a reasonable amount of time. Model checking techniques can verify that the modeled system meets safety properties, but they require that the component models be of sufficiently limited scope to run to completion. By analyzing simulation traces, model checking can also help ensure that the simulation's design meets the intended analysis goals. Leveraging both types of analysis can thus help verify operational concepts addressing the allocation of authority and autonomy. To make the analyses using both techniques more efficient, common representations for model components, methods for identifying the appropriate safety properties, and techniques for determining the set of analyses to run are required.
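As a toy illustration of what a model checker does with a safety property, the sketch below performs explicit-state breadth-first reachability and returns a counterexample trace when an invariant fails. It is not the project's actual tooling; the transition system and invariant are invented for illustration.

```python
# Minimal explicit-state model checking sketch (illustrative only):
# breadth-first search over a toy transition system, checking a safety
# invariant and returning the shortest counterexample trace.
from collections import deque

def check_invariant(initial, successors, invariant):
    """Return None if the invariant holds in all reachable states,
    else a trace (list of states) leading to a violation."""
    seen = {initial}
    queue = deque([[initial]])
    while queue:
        trace = queue.popleft()
        state = trace[-1]
        if not invariant(state):
            return trace  # counterexample: path into the unsafe region
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(trace + [nxt])
    return None

# Toy model: a counter that a normative (+1) or erroneous (+2) action
# advances; the safety property is "the counter stays below 5".
succ = lambda s: [s + 1, s + 2]
trace = check_invariant(0, succ, lambda s: s < 5)
print(trace)  # → [0, 1, 3, 5]
```

Real model checkers explore symbolic rather than explicit state spaces, which is why the abstractions discussed below are needed to keep the models tractable.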

We are constructing an integrated suite of simulation, model checking, and trace analysis tools that can all be driven from a single "top" model describing the interaction and capabilities of human and automated agents and environmental factors. Specifically, agents are modeled with normative and erroneous behavior, under varying collaboration and function allocation regimes. Instantiated models can be translated for use in model checkers and used in our enhanced Work Models that Compute (WMC) simulation environment. To ensure that the necessarily abstracted models suitable for model checkers capture appropriate fidelity, simplifications are being derived in systematic (and later automated) ways by computing abstractions that are over-approximations, so that any invariant that is true of an approximation is also true of the top model and, dually, any unsafe scenario in the top model is also present in the approximations. To ensure that the abstractions meet the intended analysis goals, counterexamples found in the approximations can be mined automatically to yield improved abstractions, a process called "counterexample-guided abstraction refinement" (CEGAR). These techniques are well known in hardware verification; this project will break new ground by applying them to dynamical systems.
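The CEGAR loop described above can be sketched as follows. The function names, the toy model, and the refinement step are illustrative assumptions, not the project's actual interfaces: a counterexample from the abstract model is replayed on the concrete "top" model, and spurious counterexamples drive refinement of the abstraction.

```python
# A hedged toy sketch of counterexample-guided abstraction refinement
# (CEGAR); all names and the toy model are illustrative placeholders.

def cegar(check_abstract, replay_on_top_model, refine, abstraction):
    """Return ('safe', final_abstraction) or ('unsafe', counterexample)."""
    while True:
        cex = check_abstract(abstraction)           # None => property holds
        if cex is None:
            return ('safe', abstraction)            # over-approx. sound => top model safe
        if replay_on_top_model(cex):
            return ('unsafe', cex)                  # genuine unsafe scenario
        abstraction = refine(abstraction, cex)      # spurious => sharpen abstraction

# Toy top model: states reachable from 0 in steps of +2, bounded at 10.
reachable = {0, 2, 4, 6, 8, 10}
# Safety property: state 7 is never reached.
check = lambda a: 7 if 7 in a else None               # abstract "model checker"
replay = lambda cex: cex in reachable                 # does the scenario replay concretely?
refine = lambda a, cex: {s for s in a if s % 2 == 0}  # drop the odd states

# The initial over-approximation {0..10} contains 7, so the abstract check
# reports a violation; replay shows it is spurious, refinement removes the
# odd states, and the refined abstraction proves the property.
verdict, _ = cegar(check, replay, refine, set(range(11)))
print(verdict)  # → 'safe'
```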

To ensure that the design of the simulation meets the intended analysis goals, we are extending our trace analysis techniques by developing a trace semantics that can express important properties of the agent language, and we will develop an automated system to run the trace analysis on the simulation and demonstrate its ability to find properties of interest, including violations of safety properties. Problems identified in the model checking process will be instantiated in the agent-based simulation. Initially, the counterexamples will be mined in a manual process to generate simulation test cases; in later phases, these processes will be automated.
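Trace analysis of this kind can be illustrated with a minimal sketch: scan recorded simulation traces for states that violate a safety property. The trace format and the separation property below are invented for illustration and are not the project's actual trace semantics.

```python
# Illustrative trace analysis: report every step of a recorded simulation
# trace where a safety predicate fails (names and data are hypothetical).

def violations(trace, safety_predicate):
    """Yield (step, state) pairs where the safety predicate fails."""
    for step, state in enumerate(trace):
        if not safety_predicate(state):
            yield step, state

# Toy property: two aircraft must keep at least 3 units of separation.
safe = lambda s: abs(s['ac1'] - s['ac2']) >= 3
trace = [{'ac1': 0, 'ac2': 10},   # separation 10: safe
         {'ac1': 4, 'ac2': 8},    # separation 4:  safe
         {'ac1': 6, 'ac2': 7}]    # separation 1:  violation
print(list(violations(trace, safe)))  # → [(2, {'ac1': 6, 'ac2': 7})]
```

In the envisioned tool suite, the violating steps found this way would feed back into the model checking process (and vice versa) rather than being inspected only by hand.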

Papers supported by the NASA award with supplemental materials:
  • Feigh, K. M. & Pritchett, A. R. (2014). Requirements for effective function allocation: A critical review. Journal of Cognitive Engineering and Decision Making, 8(1), 23-32. DOI: 10.1177/1555343413490945 paper link
  • Pritchett, A. R., Kim, S. Y., & Feigh, K. M. (2014). Measuring human-automation function allocation. Journal of Cognitive Engineering and Decision Making, 8(1), 52-77. DOI: 10.1177/1555343413490166 paper link
  • Pritchett, A. R., Kim, S. Y., & Feigh, K. M. (2014). Modeling human-automation function allocation. Journal of Cognitive Engineering and Decision Making, 8(1), 33-51. DOI: 10.1177/1555343413490944 paper link
  • Bolton, M. L. & Bass, E. J. (2013). Evaluating human-human communication protocols with miscommunication generation and model checking. In G. Brat, N. Rungta, & A. Venet (Eds.), NASA Formal Methods, Lecture Notes in Computer Science, Vol. 7871 (pp. 48-62). Berlin Heidelberg: Springer-Verlag. paper link
  • Bolton, M. L. & Bass, E. J. (2013). Generating erroneous human behavior from the strategic knowledge in task analytic models and evaluating its impact on system safety with model checking. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 43(6), 1314-1327. DOI: 10.1109/TSMC.2013.2256129 paper link
  • Bolton, M. L., Bass, E. J., & Siminiceanu, R. I. (2013). Using formal verification to evaluate human-automation interaction: A review. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 43(3), 488-503. paper link
  • Feigh, K. M. & Pritchett, A. R. (2013). Modeling the work of humans and automation in complex operations. 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Jan 7-10, 2013, Grapevine, TX. paper link
  • Gelman, G., Feigh, K. M., & Rushby, J. (2013). Example of a complementary use of model checking and agent-based simulation. 2013 IEEE International Conference on Systems, Man, and Cybernetics (pp. 900-905). Oct 13-16, 2013, Manchester, England. paper link
  • Griffith, D. & Gunter, E. (2013). LiquidPi: Inferrable dependent session types. In G. Brat, N. Rungta, & A. Venet (Eds.), NASA Formal Methods, Lecture Notes in Computer Science, Vol. 7871 (pp. 185-197). Berlin Heidelberg: Springer-Verlag. paper link
  • Rushby, J. (2013). Logic and epistemology in safety cases. Computer Safety, Reliability, and Security: Proceedings of SafeComp 32 (pp. 1-7). Sep 24-27, 2013, Toulouse, France. paper link
  • Bass, E. J., Bolton, M. L., Feigh, K. M., Gunter, E. L., & Rushby, J. (2012). Toward an integrated model checking, theorem proving and simulation framework for analyzing authority and autonomy. Workshop on Formal Methods in Human-Machine Interaction (Formal H), May 28, 2012, London, UK. paper link
  • Bolton, M. L. & Bass, E. J. (2012). Model checking human-automation interaction with enhanced operator function model. Workshop on Formal Methods in Human-Machine Interaction (Formal H), May 28, 2012, London, UK. paper link
  • Bolton, M. L. & Bass, E. J. (2012). Using model checking to explore checklist-guided pilot behavior. The International Journal of Aviation Psychology, 22(4), 343-366. DOI: 10.1080/10508414.2012.718240 paper link
  • Bolton, M. L., Bass, E. J., & Siminiceanu, R. I. (2012). Generating phenotypical erroneous human behavior to evaluate human-automation interaction using model checking. The International Journal of Human-Computer Studies, 70(11), 888-906. DOI: 10.1016/j.ijhcs.2012.05.010 paper link
  • Mansky, W. & Gunter, E. (2012). Using locales to define a rely-guarantee temporal logic. Interactive Theorem Proving (ITP), August 13-14, 2012, Princeton, New Jersey. paper link
  • Yasmeen, A., Feigh, K. M., Gelman, G., & Gunter, E. L. (2012). Formal analysis of safety-critical system simulations. In Proceedings of the 2nd International Conference on Application and Theory of Automation in Command and Control Systems (ATACCS 2012). May 29-31, 2012, London, England. paper link
  • Bass, E. J., Bolton, M. L., Feigh, K. M., Griffith, D., Gunter, E., Mansky, W., & Rushby, J. (2011). Toward a multi-method approach to formalizing human-automation interaction and human-human communications. In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics (pp. 1817-1824). October 9-12, 2011, Anchorage, Alaska. paper link
  • Bass, E. J., Feigh, K. M., Gunter, E., & Rushby, J. (2011). Formal modeling and analysis for interactive hybrid systems. 4th International Workshop on Formal Methods for Interactive Systems (FMIS), June 21, 2011, Limerick, Ireland. paper link
  • Bolton, M. L. & Bass, E. J. (2011). Evaluating human-automation interaction using task analytic behavior models, strategic knowledge-based erroneous human behavior generation, and model checking. In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics (pp. 1788-1794). October 9-12, 2011, Anchorage, Alaska. paper link
  • Bolton, M. L., Siminiceanu, R. I., & Bass, E. J. (2011). A systematic approach to model checking human-automation interaction using task-analytic models. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 41(5), 961-976. paper link
  • Yasmeen, A. & Gunter, E. L. (2011). Automated framework for formal operator task analysis. In ISSTA '11: Proceedings of the 2011 International Symposium on Software Testing and Analysis (pp. 78-88). July 17-21, 2011, Toronto, ON, Canada. paper link
  • Yasmeen, A. & Gunter, E. L. (2011). Robustness for protection envelopes with respect to human task variation. In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics (pp. 1809-1816). October 9-12, 2011, Anchorage, Alaska. paper link
  • Bolton, M. L. & Bass, E. J. (2010). Formally verifying human-automation interaction as part of a system model: Limitations and tradeoffs. Innovations in Systems and Software Engineering: A NASA Journal, 6(3), 219-231. paper link
  • Bolton, M. L. & Bass, E. J. (2010). Using task analytic models and phenotypes of erroneous human behavior to discover system failures using model checking. In Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society (pp. 992-996). Sep. 27-Oct. 1, 2010, San Francisco, CA. paper link
  • Bolton, M. L. & Bass, E. J. (2010). Using task analytic models to visualize model checker counterexamples. In Proceedings of the 2010 IEEE International Conference on Systems, Man, and Cybernetics (pp. 2069-2074). Oct. 10-13, 2010, Istanbul, Turkey. paper link