A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 8 Issue 7
Jul. 2021

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: A. J. Hepworth, D. P. Baxter, A. Hussein, K. J. Yaxley, E. Debie, and H. A. Abbass, "Human-Swarm-Teaming Transparency and Trust Architecture," IEEE/CAA J. Autom. Sinica, vol. 8, no. 7, pp. 1281-1295, Jul. 2021. doi: 10.1109/JAS.2020.1003545

Human-Swarm-Teaming Transparency and Trust Architecture

doi: 10.1109/JAS.2020.1003545
Funds:  This work was supported by the United States Office of Naval Research-Global (ONR-G) under Grant N629091812140
Abstract
  • Transparency is a widely used but poorly defined term within the explainable artificial intelligence literature. This is due, in part, to the lack of an agreed definition and the overlap between the connected — sometimes used synonymously — concepts of interpretability and explainability. We assert that transparency is the overarching concept, with the tenets of interpretability, explainability, and predictability subordinate. We draw on a portfolio of definitions for each of these distinct concepts to propose a human-swarm-teaming transparency and trust architecture (HST3-Architecture). The architecture reinforces transparency as a key contributor towards situation awareness, and consequently as an enabler for effective trustworthy human-swarm teaming (HST).
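
    The abstract states the concept hierarchy only in prose, so the following is a minimal, hypothetical Python sketch of how the three subordinate tenets could compose into a single transparency report for a swarm operator. Every name here (TenetReport, interpret, explain, predict, transparency_report) and the toy swarm state are illustrative assumptions, not constructs taken from the paper.

    ```python
    # Hypothetical sketch only: these names model the abstract's concept
    # hierarchy (transparency overarching; interpretability, explainability,
    # and predictability subordinate) and are NOT taken from the paper.
    from dataclasses import dataclass


    @dataclass
    class TenetReport:
        """Output of one subordinate tenet for a given swarm state."""
        tenet: str         # "interpretability" | "explainability" | "predictability"
        content: str       # human-readable message surfaced to the operator
        confidence: float  # in [0, 1]: how reliable the message is


    def interpret(state: dict) -> TenetReport:
        # WHAT the swarm is doing, stated in the operator's terms (toy rule).
        return TenetReport(
            "interpretability",
            f"swarm is {state['behaviour']} with cohesion {state['cohesion']:.2f}",
            confidence=0.9,
        )


    def explain(state: dict) -> TenetReport:
        # WHY the swarm is doing it (toy rule).
        return TenetReport(
            "explainability",
            f"behaviour was triggered by {state['trigger']}",
            confidence=0.8,
        )


    def predict(state: dict) -> TenetReport:
        # WHAT the swarm is expected to do next (toy rule).
        return TenetReport(
            "predictability",
            f"expected to reach {state['goal']} in about {state['eta_s']} s",
            confidence=0.7,
        )


    def transparency_report(state: dict) -> list[TenetReport]:
        """Transparency as the overarching concept: the union of the three
        subordinate tenet reports, which feeds the operator's situation awareness."""
        return [interpret(state), explain(state), predict(state)]


    if __name__ == "__main__":
        state = {
            "behaviour": "flocking",
            "cohesion": 0.82,
            "trigger": "an operator waypoint update",
            "goal": "area B",
            "eta_s": 45,
        }
        for report in transparency_report(state):
            print(f"[{report.tenet}] {report.content} (confidence {report.confidence})")
    ```

    The design mirrors the abstract's claim: transparency is not a fourth signal of its own, but the overarching union of the three tenet outputs that the operator's situation awareness consumes.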

     




    Highlights

    • Propose a Human-Swarm-Teaming Transparency and Trust Architecture (HST3-Architecture).
    • The HST3-Architecture reinforces transparency as a key contributor to situation awareness.
    • Assert that transparency is the overarching concept, comprising three subordinate tenets.
    • Define the key sub-tenets of interpretability, explainability, and predictability.
