With the development of computer science, sensor techniques, communication technology, and control technology, more and more complicated engineering systems are being created to provide services for human beings. In outer space, a growing number of satellites and spaceships provide communication services and platforms for scientific experiments. Underwater, robots perform experiments for geological exploration.
For a newly built engineering system, an evaluation system over the massive data generated from test and validation experiments is critical for assessing performance attributes such as effectiveness and safety. A complicated system usually contains sensors, computing units, and actuators, and the data used by the evaluation system is usually collected from the sensors during experiments. Generally, the structure of an engineering system can be described as a cyber-physical system (CPS), as shown in Fig. 1 [1].
Fig. 1. The structure of a typical engineering system.
As complicated systems increasingly exhibit distribution, heterogeneity, autonomy, and mass data, an intelligent evaluation system should have the following features [2, 3].
1) Distributed: In a complicated system, the sensors are usually distributed, and the evaluation system should support this distribution, for example over the Internet, so that its users can be anywhere.
2) Universal: In general, each evaluation system is designed for one specific target system, which is wasteful. To be more intelligent, the evaluation framework should be universal across different systems.
3) Interactive: An evaluation usually involves different parties that supply the subsystems of a complicated system. Thus, the evaluation system should be interactive and configurable by its users.
4) Real-time: For some special uses, the evaluation system must support on-line evaluation and return feedback in time.
5) Mass-data processing: The more complicated the system, the more data it generates. Thus, the evaluation system must support mass-data processing.
Considering the above requirements, the evaluation framework should not only support mass-data processing and real-time behavior, but also be distributed and interactive. However, as shown in Section II, typical existing evaluation frameworks usually fail to cover all of these features, especially the interaction capability, which is a key requirement of newly designed complicated systems.
This paper describes an intelligent evaluation framework based on multi-agent technology, which ensures the above features effectively. The framework covers two aspects: how to divide the system into different agents, and how the agents cooperate with each other through a data platform to fulfill the whole evaluation function.
The rest of the paper is organized as follows. Section II reviews typical evaluation frameworks. In Section III, we propose the evaluation framework based on multi-agent technology. Section IV gives two case studies, an indoor comfort system and a technology assessment. Section V concludes this paper.
II. THE EXISTING EVALUATION FRAMEWORKS

In this section, we describe three basic evaluation frameworks that are extensively used in different domains. However, they are usually centralized and fixed to specific systems, which limits their intelligence.
A. Serial Evaluation Framework

The serial evaluation framework is a typical framework on which many engineering evaluation systems are built, as shown in Fig. 2. The system consists of serial logical processes (LPs); the output of one logical process is the input of the next. It is simple and easy to understand, but when the evaluated object becomes complicated, evaluation under this framework becomes hard [4].
Fig. 2. The structure of serial evaluation.
B. Parallel Evaluation Framework

The framework of parallel evaluation is shown in Fig. 3. The main idea is to divide the complicated evaluation system into different logical processes. When the evaluation starts, each logical process works asynchronously and assesses its own parameters, provided the logical processes are independent. Compared with serial evaluation, parallel evaluation computes faster. However, the result of one logical process usually influences other processes [5], so the logical processes must be synchronized to make sure the whole system works correctly. Synchronization is thus the main difficulty in establishing such a framework.
Fig. 3. The structure of parallel evaluation.
Synchronization mechanisms are classified into two types, conservative and optimistic. In the conservative mechanism, the causality between logical processes must be guaranteed: if one logical process needs to interact with another, their local simulation times are synchronized first, while a logical process that cannot be influenced by the others continues asynchronously. In the optimistic mechanism, the causality of interactions is not checked in advance; each logical process simulates according to its local event list, and when a causality error occurs, the state of the system is rolled back to the point before the error. On the basis of these two mechanisms, a hybrid mechanism has been proposed that combines the properties of both. Another, self-adapting, approach switches mechanisms according to the state of the system.
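As a concrete illustration of the optimistic mechanism, the following Python sketch (all names and the toy state are illustrative, not from the paper) processes events eagerly in local-time order and rolls a logical process back to its last valid snapshot when a straggler event reveals a causality error:

```python
class OptimisticLP:
    """Toy optimistic logical process: execute eagerly, roll back on stragglers."""

    def __init__(self):
        self.local_time = 0
        self.state = 0                # illustrative state: running sum of event values
        self.snapshots = [(0, 0)]     # (time, state) saved after each event
        self.done = []                # events already processed, as (time, value)

    def receive(self, t, value):
        if t < self.local_time:       # straggler arrived: causality error
            undone = self._rollback(t)
            # re-execute the undone events together with the straggler, in order
            for ev in sorted(undone + [(t, value)]):
                self._execute(*ev)
        else:
            self._execute(t, value)

    def _execute(self, t, value):
        self.state += value
        self.local_time = t
        self.snapshots.append((t, self.state))
        self.done.append((t, value))

    def _rollback(self, t):
        # undo every event at or after the straggler's timestamp
        undone = [ev for ev in self.done if ev[0] >= t]
        self.done = [ev for ev in self.done if ev[0] < t]
        while len(self.snapshots) > len(self.done) + 1:
            self.snapshots.pop()
        self.local_time, self.state = self.snapshots[-1]
        return undone
```

After the rollback, the final state is the same as if all events had arrived in timestamp order, which is exactly the guarantee the optimistic mechanism buys at the cost of bookkeeping.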
C. High Level Architecture Evaluation Framework

In the high level architecture (HLA), the whole evaluation framework is considered a federation of different logical processes, as shown in Fig. 4. The logical processes are independent but united by a universal and relatively independent operational support system, which includes the run time infrastructure (RTI) and the underlying communication system. It not only provides the underlying communication services but also takes charge of management. The architecture separates the underlying support environment from the application layer, so flexible combination and configuration can be achieved according to the needs of different users and purposes [6].
Fig. 4. The structure of HLA.
III. EVALUATION FRAMEWORK BASED ON MULTI-AGENT TECHNOLOGY

In this section, we propose our evaluation framework based on multi-agent technology. First, we describe our agent-based modeling method. Then, the method is used to model the evaluation framework: the first step is to divide the system into different agents, and the second is to make the agents cooperate with each other through a data platform to fulfill the whole evaluation. Last, we list the advantages of the evaluation framework and analyze its efficiency.
A. Agent-based Modeling Method

In agent-based modeling, the system is considered a set of different agents. A general agent has the following capabilities [7, 8, 9, 10].
1) Autonomy: The agent controls its own states and behaviors without interference from other programs.
2) Response: The agent can make timely and correct responses to unpredictable events and environmental changes.
3) Communication: The agent is capable of exchanging information and interacting with other agents.
4) Planning and decision making: After obtaining enough information, the agent can plan and make decisions to solve problems based on its knowledge.
5) Cooperation: Agents coordinate and cooperate to deal with complex problems and fulfill the whole mission.
Thus, we can conclude that an agent must have three basic functions [5, 11]: communication, logical decision, and execution. The agents cooperate with each other in a specific structure and organize the whole system. Considering all these aspects, we define the agent as two parts, the logical part and the data storage part, as shown in Fig. 5.
Fig. 5. The agent consisting of two parts.
The logical part, whose function is similar to the human head, takes charge of communicating with other agents and performs the logical processing of input data. The processed data is stored in the data storage part, similar to the human body, which provides input data for other agents. Furthermore, we define the following two basic rules for data communication, which guarantee the features of the agents, especially autonomy.
1) The processed data of the logical part can only be stored in the data storage part of its own agent.
2) The data in the data storage part can only be renewed by the logical part of its own agent.
After defining the agents, with the functions of the logical part and the data types of the data storage part, the most important step is to make the agents communicate and cooperate with each other so as to organize the whole system.
In traditional communication schemes, the information exchanged between objects involves control policy, which may increase the complexity and instability of the system. To avoid this, we define a third rule.
3) The information exchanged between agents consists only of data, without control policy or semantics.
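A minimal sketch of how the three rules can be enforced by construction, in Python with purely illustrative names: the storage is private, only the agent's own logical part writes it, and other agents may only read plain data from it:

```python
class Agent:
    """Two-part agent: a logical part (here a pure function) and a
    private data storage part.  The three rules hold by construction:
      1) processed data lands only in this agent's own storage;
      2) only this agent's logical part renews that storage;
      3) agents exchange plain data, never control policy."""

    def __init__(self, name, logic):
        self.name = name
        self._logic = logic       # the logical part
        self._storage = None      # the data storage part (private)

    def step(self, *inputs):
        # Rules 1 and 2: the logical part is the sole writer of its own storage.
        self._storage = self._logic(*inputs)

    def read(self):
        # Rule 3: other agents only ever see plain data, no control logic.
        return self._storage
```

Two such agents can then be chained simply by letting one `step` on the other's `read()` output, which is exactly the communication pattern the rules permit.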
These three rules ensure that the agents work autonomously and communicate with each other in a regular way. The next step is to build a suitable structure for the target system. The structure mainly falls into two sub-structures: the open-loop structure shown in Fig. 6 and the closed-loop structure shown in Fig. 7.
Fig. 6. The open-loop structure of agent-based framework.
Fig. 7. The closed-loop structure of agent-based framework.
In the open-loop structure, the agents communicate sequentially: the former agent's data is read by the latter and becomes the latter's control signal, fulfilling open-loop control. In the closed-loop structure, the agents work with feedback data. For example, in Fig. 7, agent Y can be considered the controller, agent Z the controlled object, and agent X the sensor. The controller performs feedback control by using the error between the pre-set points of the controlled object and the values measured by the sensor agent.
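The closed-loop structure of Fig. 7 can be sketched as follows; this is an illustrative toy with a proportional controller and an assumed first-order plant, not part of the paper's system. Each "agent" only renews its own datum and reads the others':

```python
def run_closed_loop(setpoint=20.0, steps=50, gain=0.5):
    """Agent X (sensor) reads the plant, agent Y (controller) stores the
    correction computed from the set-point error, and agent Z (controlled
    object) applies it to renew its own storage."""
    z_state = 0.0                              # agent Z's storage: plant output
    for _ in range(steps):
        x_data = z_state                       # agent X stores the measurement
        y_data = gain * (setpoint - x_data)    # agent Y stores the control signal
        z_state = z_state + y_data             # agent Z renews its own storage
    return z_state
```

With this proportional feedback, the plant output converges geometrically to the set point, illustrating how the loop closes purely through data exchange between the three agents.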
Since the agents need to communicate with each other continuously, the system needs a data platform supporting mass-data exchange, which must be able not only to store mass data but also to support frequent reading and writing. A database is therefore a good choice for the data platform; it also helps to manage the system's time.
Another factor to consider is the scheduling policy, which determines the control logic and clock mechanism. Currently there are two scheduling policies, based on the time slice and on the discrete event, respectively. In general, the discrete-event policy is more efficient than the time-slice policy when the event list is easy to schedule; however, when the system becomes more complex, the time-slice policy is more suitable [12].
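For reference, the discrete-event policy can be sketched in a few lines of Python (illustrative, using the standard-library heap): instead of advancing a global clock in fixed slices, the clock jumps directly to the timestamp of the next pending event:

```python
import heapq

def run_discrete_event(events):
    """Toy discrete-event scheduler: pop events in timestamp order,
    letting the simulation clock jump from event to event."""
    queue = list(events)                 # (timestamp, action) pairs
    heapq.heapify(queue)
    trace, clock = [], 0
    while queue:
        clock, action = heapq.heappop(queue)
        trace.append((clock, action))    # events fire in timestamp order
    return clock, trace
```

A time-slice scheduler would instead loop over every tick and poll each logical process, which wastes work when events are sparse but avoids maintaining a global event list, matching the trade-off described above.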
B. Evaluation Framework Based on Multi-agent Technology

To build a universal evaluation framework based on agents, we need to analyze the basic functions and needs of evaluation.
First, consider the input data of the evaluation. For different evaluation targets, the input data can be classified into quantitative and qualitative data; for some complex targets, the two kinds are combined, as shown in our case studies in Section IV. Quantitative data may come from real measurements or from simulations, and qualitative data may come from expert judgment. Second, different users are likely to care about different indices of the evaluation result: the designers of a target system care more about reliability, while its users may care more about energy cost and comfort, and even different designers may care about different things. Thus, the data processing of the evaluation must be intelligent enough to deal with different kinds of data and needs. Last, the output of the evaluation should also be flexible enough to meet different needs. Considering all these issues, we propose our evaluation framework based on multi-agent technology.
1) Dividing the evaluation system into different agents: After analyzing the functions and needs mentioned above, we define four kinds of agents: the data-interface agent, the data-processing agent, the user agent, and the result output agent. The input, output, and function of each agent are described as follows.
The data-interface agent gets different data from the target and puts the processed data in its data storage part. Its main function is to supply available, standard initial data for the evaluation; thus, its logical part must both handle different kinds of data and produce standard initial data. The input may be on-line data from the sensors or off-line data files, depending on the purpose. The data storage part takes charge of managing and storing the processed data, which can be read by other agents when needed. The structure of the data-interface agent is shown in Fig. 8.
Fig. 8. The structure of data-interface agent.
The user agent mainly supplies evaluation needs and choices, such as the evaluation metrics and methods. It is the interactive part of the evaluation framework and can supply personalized services for different users. Users can also adjust the configuration data after seeing the evaluation result, so it supports feedback evaluation. The structure of the user agent is shown in Fig. 9.
Fig. 9. The structure of user agent.
The data-processing agent is the key agent in the system. First, its logical part gets standard initial data from the data-interface agent and configuration data from the user agent. Then it performs the evaluation accordingly. Finally, the result is stored in its data storage part; in addition, the result of each evaluation step is stored for diagnosis. The structure of the data-processing agent is shown in Fig. 10.
Fig. 10. The structure of data-processing agent.
The result output agent supplies the evaluation result in different forms. It gets standard initial data from the data-interface agent and the evaluation result from the data-processing agent. For example, the standard initial data can be plotted as curves together with the related evaluation result to show the performance, and the output can be saved as an evaluation report in the data storage part. The structure of the result output agent is shown in Fig. 11.
Fig. 11. The structure of result output agent.
2) Organizing the whole evaluation system based on multi-agent technology: After modeling each agent, the whole framework of the evaluation system is as shown in Fig. 12.
Fig. 12. The framework of evaluation system.
In the evaluation framework, initial data enters the data-interface agent and is transformed into standard data; the user agent generates configuration data according to the evaluation needs, choices, and feedback; the data-processing agent performs the evaluation based on the standard data and configuration data; and after the evaluation, the result output agent generates the output. The framework supports both on-line and off-line evaluation.
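The data flow just described can be sketched end to end. In this illustrative Python fragment a plain dict stands in for the database platform, and all agent logic (the standardization rule, the metrics) is assumed for the example rather than taken from the paper:

```python
db = {}   # stand-in for the shared database platform

def data_interface_agent(raw):
    # standardize: drop invalid readings, coerce to a uniform numeric format
    db["standard"] = [float(x) for x in raw if x is not None]

def user_agent(metric):
    # configuration data: which evaluation metric the user cares about
    db["config"] = {"metric": metric}

def data_processing_agent():
    data, cfg = db["standard"], db["config"]
    if cfg["metric"] == "mean":
        db["result"] = sum(data) / len(data)
    elif cfg["metric"] == "max":
        db["result"] = max(data)

def result_output_agent():
    # format the stored result as a small report
    db["report"] = f"{db['config']['metric']} = {db['result']:.2f}"

data_interface_agent([21, None, 23, 22])
user_agent("mean")
data_processing_agent()
result_output_agent()
print(db["report"])   # mean = 22.00
```

Note that each agent writes only its own key in the shared store and reads the others' keys as plain data, mirroring rules 1)-3) above.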
C. Advantages and Efficiency Analysis of Evaluation Framework

1) Advantages of evaluation framework: Although the evaluation framework is interactive and universal, the function of each agent is specific; the agents cooperate to fulfill the system-level evaluation tasks. Since the evaluation needs and choices are configurable, the framework can supply personalized service, which increases user participation. The framework is therefore well suited to complex, multi-functional systems, whose evaluation is more sensitive to the configuration data.
In a practical evaluation project, the evaluation is likely to be stratified, so the users can also be classified into different levels: some users may only see the evaluation result, while high-level users have the right to supply the evaluation needs and change the evaluation parameters. Besides, by using a database, the system can easily be deployed over the Internet, which makes the evaluation more distributed.
The proposed agent-based framework is also suitable for uncertain and unpredictable evaluation cases. By generating different simulation data and configuring the evaluation methods and metrics, we can predict the evaluation outcome and adjust the methods and metrics accordingly. Furthermore, by adding a simulation model to supply initial data, the whole system becomes an evaluation platform that is flexible for various usages [13].
2) Efficiency analysis: To demonstrate the efficiency of the framework, we establish the following model to compare our framework with traditional evaluation frameworks.
First, consider the efficiency of data storage. Assume we need to evaluate an experiment; after the experiment, we get the initial input data $\tilde X$, which includes ${\tilde x_1},{\tilde x_2},\cdots,{\tilde x_N}$, where ${\tilde x_i}$ is the output of sensor $i$. To do the evaluation, the initial data must be transformed into standard data ${x_1},{x_2},\cdots,{x_N}$; the processing time for ${\tilde x_i}$ is $T_i^{ini}$. We run the evaluation $K$ times in order to cover different evaluation metrics. The initial-data processing time per evaluation, ${T_{ini}}$, can be calculated as follows. For the agent-based evaluation framework, which stores and reuses the standard data,
$\begin{align} {T_{ini}}(agent) = \sum\limits_{i = 1}^N {T_i^{ini}} /K. \end{align}$ | (1) |
But for traditional evaluation systems, which do not store the standard data, more processing time is required, since
$\begin{align} {T_{ini}}(tradition) = \sum\limits_{i = 1}^N {T_i^{ini}}. \end{align}$ | (2) |
Furthermore, we compare the evaluation processing efficiency. For the above experiment, there are $M$ performance metrics to be assessed, and the evaluation time for performance index $i$ is $T_i^{index}$, $i = 1,2,\cdots,M$. In traditional evaluation systems, the processing time for the experiment is
$\begin{align} {T_{exp} }(tradition) = \sum\limits_{i = 1}^N {T_i^{ini}} + \sum\limits_{i = 1}^M {T_i^{index}}. \end{align}$ | (3) |
For some specific needs, only part of the performance metrics are evaluated. By configuring the data storage of the user agent, the processing time of the agent-based evaluation can be reduced. Let ${I_i}$ denote the choosing coefficient for index $i$:
$\begin{align} {T_{exp }}(agent) = \sum\limits_{i = 1}^N {T_i^{ini}/K} + \sum\limits_{i = 1}^M {{I_i} \cdot T_i^{index}}, \end{align}$ | (4) |
$\begin{align} {I_i} = \left\{ {\begin{array}{*{20}{l}} {1,\qquad {\rm chosen,}}\\ {0,\qquad {\rm otherwise.}} \end{array}} \right. \end{align}$ | (5) |
When the indices are chosen randomly, we can use a probability model to derive the expected processing time. For $K$ evaluations, if the probability of index $i$ being chosen is ${P_i}$, then
$\begin{align} {\rm E}({I_i}) = {P_i}, \end{align}$ | (6) |
$\begin{align} {{\bar T}_{exp}}(agent) &= {\rm E}\left(\sum\limits_{i = 1}^N {T_i^{ini}/K} + \sum\limits_{i = 1}^M {{I_i} \cdot T_i^{index}}\right) \nonumber\\ &= \sum\limits_{i = 1}^N {T_i^{ini}/K} + \sum\limits_{i = 1}^M {{\rm E}({I_i}) \cdot T_i^{index}} \nonumber\\ &= \sum\limits_{i = 1}^N {T_i^{ini}/K} + \sum\limits_{i = 1}^M {{P_i} \cdot T_i^{index}}. \end{align}$ | (7) |
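A small worked example of (2)-(4) and (7), with illustrative numbers ($N = 3$ sensors, $K = 4$ evaluations, $M = 2$ metrics):

```python
T_ini = [2.0, 3.0, 1.0]      # per-sensor preprocessing times T_i^ini
T_index = [5.0, 4.0]         # per-metric evaluation times T_i^index
K = 4                        # number of evaluation runs

# eq. (3): traditional system re-processes all initial data every run
t_trad = sum(T_ini) + sum(T_index)                      # 6 + 9 = 15

# eq. (4): agent-based system amortizes preprocessing and skips metric 2
chosen = [1, 0]                                         # choosing coefficients I_i
t_agent = sum(T_ini) / K + sum(I * t for I, t in zip(chosen, T_index))

# eq. (7): expected time when metrics are chosen with probabilities P_i
P = [0.5, 0.25]
t_agent_exp = sum(T_ini) / K + sum(p * t for p, t in zip(P, T_index))

print(t_trad, t_agent, t_agent_exp)   # 15.0 6.5 5.0
```

Even in this tiny setting, amortizing the preprocessing over $K$ runs and skipping unselected metrics cuts the per-run time by more than half.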
It should be pointed out that our scheme has the same space complexity as the traditional ones even though it reduces computation time: the trick is to store the processed data and configuration data instead of recreating them repeatedly.
Besides, traditional evaluation systems usually require building a new system for each target system or for special evaluation needs and choices. With the agent-based evaluation framework, we only need to reconfigure the system by changing specific agents to meet such requirements.
The four agents can also be distributed over four computers and communicate with each other through the data storage. This parallelism makes the evaluation more efficient than traditional systems that must run on a single computer.
Last, the proposed framework is more efficient for diagnosis. Since the data-processing agent stores the result of each evaluation step, evaluation errors can be located more efficiently by checking the intermediate data than in serial evaluations.
IV. CASE STUDIES

In this section, we give two evaluation case studies. One assesses an indoor comfort system by indices of performance, energy, and satisfaction; the other is a typical technology assessment in a spacecraft system.
A. Assessment of Indoor Comfort System

To validate the evaluation framework proposed in this paper, we carry out a case study on the indoor comfort control system shown in Fig. 13, a kind of intelligent air conditioning system. The result shows that the evaluation framework works effectively and efficiently. The system contains six parts: human subjects (H), rooms (R), sensors (S), communication networks (CN), a computing unit (CU), and control actuators (CA). The human subjects provide complaints about the room environment, such as hot or cold. The sensors measure environment parameters such as air temperature, humidity, and CO2 level. These inputs are collected through the communication networks and processed by the computing unit according to various control laws. The control actions, such as changing the pre-set points of environment parameters (for example, room temperature), are sent to the control actuators through the communication networks to close the loop. Our purpose is to evaluate the performance and energy of the control system and the satisfaction level of the humans.
Fig. 13. The structure of the whole indoor comfort system.
Our agent-based evaluation system is established according to Fig. 12. The input of the data-interface agent is the sensor measurements, including complaint data, room parameters, etc. The input of the user agent is the evaluation requirements, such as performance, energy, or satisfaction level, configured by the user of the evaluation system. The data-processing agent calculates the evaluation result according to the standard data from the data-interface agent and the evaluation needs from the user agent. The result output agent presents the evaluation result in different ways.
Fig. 14. Performance of the indoor comfort system in one day.
The performance evaluation is mainly based on the curves of the environment parameters, which are shown for one day in Fig. 14.
To evaluate the energy of the system, the data-processing agent uses the professional software DeST [14]. For the satisfaction evaluation, the reciprocal of the total number of complaints serves as the satisfaction index, which is shown in Table I.
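The satisfaction index just described is simple to compute; a sketch follows (the complaint counts are illustrative, and the convention for a complaint-free period is our assumption, not the paper's):

```python
def satisfaction_index(complaints_per_hour):
    """Satisfaction as the reciprocal of the total complaint count.
    With zero complaints we return 1.0, an assumed convention."""
    total = sum(complaints_per_hour)
    return 1.0 / total if total else 1.0

print(satisfaction_index([0, 2, 1, 0, 2]))   # 0.2
```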
B. Technology Assessment in Spacecraft Systems

We also carry out a case study of assessing technologies and experiments in spacecraft systems.
In a spacecraft system, it is important and common to assess technologies by doing experiments. For simple and direct experiments, the assessment from sensor data is easy, but it becomes hard when the experiments are complex: a complex experiment may involve hundreds of sensors, a technology may include several complex experiments, and technologies and experiments are generally coupled with each other. The proposed agent-based framework can be used for such technology assessment.
Consider a specific technology consisting of $M$ experiments, each with $N$ experiment indices. First, each experiment $m$ is assessed by calculating its precision ${}^m{{{P}}_E}$ and accessibility parameter ${}^m{{A}}{{{c}}_E}$ as
$\begin{align} {}^ne = |{}^{real}D-{}^{exp }D|, \end{align} $ | (8) |
$\begin{align} {}^np = \frac{{{}^nE-{}^ne}}{{{}^nE}}, \end{align}$ | (9) |
$\begin{align} {}^m{{{P}}_E} = \sum\limits_{n = 1}^N {{\eta _n} \cdot \max ({}^np,0)}, \end{align}$ | (10) |
$\begin{align} {}^m{{A}}{{{c}}_E} = \left\{ \begin{array}{ll} 1, & \exists n,\ {}^np \ge 0,\\ 0, & {\rm otherwise}, \end{array} \right. \end{align}$ | (11) |
where ${}^{real}D$ and ${}^{exp }D$ stand for the real value and the expected value, respectively, ${}^np$ is the precision of experiment index $n$, ${}^nE$ is the standard error, ${}^ne$ is the real error, and ${\eta _n}$ is the relative weight of index $n$ in the experiment. Then, the technology is assessed by its precision ${{{P}}_T}$, accessibility parameter ${{A}}{{{c}}_T}$, and availability ${{A}}{{{v}}_T}$ as
$\begin{align} {{{P}}_T} = \sum\limits_{m = 1}^M {{}^m{{{P}}_E}} \cdot {\omega _m}, \end{align}$ | (12) |
$\begin{align} {{A}}{{{c}}_T} = \max ({}^1{{A}}{{{c}}_E},{}^2{{A}}{{{c}}_E},\cdots,{}^M{{A}}{{{c}}_E}), \end{align}$ | (13) |
$\begin{align} {{A}}{{{v}}_T} = \sum\limits_{m = 1}^M {{}^m{{{P}}_E} \cdot {}^m{{A}}{{{c}}_E}} \cdot {\omega _m}, \end{align}$ | (14) |
where ${\omega _m}$ is the relative weight of experiment $m$ in the technology.
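Equations (8)-(14) can be implemented directly; the following Python sketch (the function names and the per-index tuple layout are our assumptions) computes the experiment-level and technology-level scores:

```python
def experiment_scores(indices):
    """Return (P_E, Ac_E) for one experiment, eqs. (8)-(11).
    Each index is a tuple (real D, expected D, standard error E, weight eta)."""
    ps = []
    for real, exp, E, eta in indices:
        e = abs(real - exp)          # eq. (8): real error
        p = (E - e) / E              # eq. (9): per-index precision
        ps.append((p, eta))
    P_E = sum(eta * max(p, 0) for p, eta in ps)        # eq. (10)
    Ac_E = 1 if any(p >= 0 for p, _ in ps) else 0      # eq. (11)
    return P_E, Ac_E

def technology_scores(experiments, weights):
    """Return (P_T, Ac_T, Av_T) for a technology, eqs. (12)-(14)."""
    scores = [experiment_scores(ix) for ix in experiments]
    P_T = sum(P * w for (P, _), w in zip(scores, weights))         # eq. (12)
    Ac_T = max(Ac for _, Ac in scores)                             # eq. (13)
    Av_T = sum(P * Ac * w for (P, Ac), w in zip(scores, weights))  # eq. (14)
    return P_T, Ac_T, Av_T
```

For instance, an experiment whose only index has real error within the standard error contributes a positive precision and accessibility 1, while an experiment whose error exceeds the standard error contributes nothing.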
Next, we establish the evaluation framework for this case. The data-interface agent gets experiment data from the sensors, unifies the data format, and stores the processed data $({}^{real}D,{}^{exp }D)$ in its data storage. The user agent configures the standard error ${}^nE$ for each index, the weights ${\eta _n}$ for each experiment, and the weights ${\omega _m}$ for the technology. The data-processing agent uses the data and configuration to calculate the results for the technologies; besides, every index and experiment result is stored in the storage part. The result output agent shows the sensor data from the data-interface agent and the related results from the data-processing agent in different ways.
V. CONCLUSION

In this paper, we review three traditional evaluation frameworks and present their disadvantages when applied to complex objects. Then, we describe the agent-based modeling method and propose our agent-based evaluation framework. Finally, two case studies demonstrate the effectiveness of the proposed framework.
[1] | Wen Jing-Rong, Wu Mu-Qing, Mu Jing-Fang. Cyber-physical system. Acta Automatica Sinica, 2012, 38(4):507-517 (in Chinese) |
[2] | Derler P, Lee E A, Vincentelli A S. Modeling cyber-physical systems. Proceedings of the IEEE, 2012, 100(1):13-28 |
[3] | Lee E. Cyber-physical systems:design challenges. In:Proceedings of the 11th IEEE International Symposium on Object Component Oriented Real Time Distributed Computing. Orlando, USA:IEEE, 2008.363-369 |
[4] | Blackstock K L, Kelly G J, Horsey B L. Developing and applying a framework to evaluate participatory research for sustainability. Ecological Economics, 2007, 60(4):726-742 |
[5] | Xiao Tian-Yuan, Fan Wen-Hui. Introduction to System Simulation. Beijing:Tsinghua University Press, 2009(in Chinese) |
[6] | Zhou Yan, Dai Jian-Wei. HLA Simulation Program Design. Beijing:Electronic Industry Press, 2002. 11-15(in Chinese) |
[7] | Girardi R, Marinho L B, Oliveira I R. A system of agent-based software patterns for user modeling based on usage mining. Interacting with Computers, 2005, 17(5):567-591 |
[8] | Ginot V, Page C L, Souissi S. A multi-agents architecture to enhance end-user individual-based modelling. Ecological Modelling, 2002, 157(1):23-41 |
[9] | Khouja M, Hadzikadic M, Zaffar M A. An agent based modeling approach for determining optimal price-rebate schemes. Simulation Modelling Practice and Theory, 2008, 16:111-126 |
[10] | Smajgl A, Izquierdo L R, Huigen M. Rules, knowledge and complexity:how agents shape their institutional environment. Journal of Modeling and Simulation of System, 2010, 1(2):98-107 |
[11] | Ni Jian-Jun. Complex Systems Modeling and Control of Multi-agent Theory and Application. Beijing:Electronic Industry Press, 2011. 26-28(in Chinese) |
[12] | Yan Chao-Bo, Lai Hua-Gui, Zhao Qian-Chuan. A framework of parallel simulation for multi-agent systems. Journal of System Simulation, 2010, 22(1):191-195(in Chinese) |
[13] | Anderson J, Evans M. Intelligent agent modeling for natural resource management. Mathematical and Computer Modelling, 1994, 20(8):109-119 |
[14] | Jiang Y. Building Environmental System Simulation and Analysis-Dest. Beijing:China Building Industry Press, 2006. 26-46(in Chinese) |