Description:
This paper studies a method for evaluating aircraft interception combat effectiveness using agent-based modeling and simulation (ABMS). Addressing the characteristics of interception combat, an agent model structure is proposed that is organized around the information process and centered on a task state machine. Within this structure, the concept of an information-domain-mapping agent is introduced, together with a method for modeling sensor detection and tracking based on this concept, and a general model of radar detection and tracking is given. Simulations and parameter-influence analyses are carried out for two typical interception scenarios, verifying the rationality and effectiveness of the method. The results show that the way aircraft performance affects combat effectiveness differs across three combat modes: single-aircraft combat, multi-aircraft platform-centric combat, and multi-aircraft network-centric combat.
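The abstract's core idea of an agent whose behavior is driven by a task state machine reacting to information events can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model; the states (`PATROL`, `DETECT`, `TRACK`, `ENGAGE`) and transition rules are hypothetical stand-ins for the mission phases of an interceptor agent.

```python
from enum import Enum, auto

class TaskState(Enum):
    # Hypothetical mission phases for an interceptor agent
    PATROL = auto()
    DETECT = auto()
    TRACK = auto()
    ENGAGE = auto()

class InterceptorAgent:
    """Agent whose per-tick behavior is selected by its task state,
    advanced by incoming information events (detections, track locks)."""

    def __init__(self):
        self.state = TaskState.PATROL

    def step(self, target_detected: bool, track_locked: bool) -> TaskState:
        # Each information event may advance the mission state machine
        if self.state is TaskState.PATROL and target_detected:
            self.state = TaskState.DETECT
        elif self.state is TaskState.DETECT and track_locked:
            self.state = TaskState.TRACK
        elif self.state is TaskState.TRACK:
            self.state = TaskState.ENGAGE
        return self.state

agent = InterceptorAgent()
agent.step(target_detected=True, track_locked=False)         # PATROL -> DETECT
agent.step(target_detected=True, track_locked=True)          # DETECT -> TRACK
final = agent.step(target_detected=True, track_locked=True)  # TRACK -> ENGAGE
```

In a full ABMS setup, each simulated aircraft would run such a state machine every tick, with the transitions fed by the sensor-detection and tracking models the abstract describes.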
Description:
Aerial base stations (AeBSs), as crucial components of air-ground integrated networks, can serve as edge nodes that provide flexible services to ground users. Optimizing the deployment of multiple AeBSs to maximize system energy efficiency is currently a prominent and actively researched topic in AeBS-assisted edge-cloud computing networks. In this paper, we deploy AeBSs using multi-agent deep reinforcement learning (MADRL). We formulate the multi-AeBS deployment problem as a decentralized partially observable Markov decision process (Dec-POMDP), taking into account the constrained observation range of each AeBS. A hypergraph convolution mix deep deterministic policy gradient (HCMIX-DDPG) algorithm is designed to maximize system energy efficiency. The proposed algorithm uses a value decomposition framework to solve the lazy-agent problem, and a hypergraph convolutional network (HGCN) is introduced to strengthen the cooperative relationships between agents. Simulation results show that the proposed HCMIX-DDPG algorithm outperforms baseline algorithms in the multi-AeBS deployment scenario.
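The hypergraph convolution step mentioned above can be illustrated with a minimal NumPy sketch of the standard HGCN propagation rule, X' = D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X Θ (nonlinearity omitted). This is an assumption about the layer's form based on the common HGCN formulation, not the paper's exact network; the example incidence matrix `H` (which agents share a hyperedge) and features `X` are made up for illustration.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer (common HGCN form).

    X:     (n_agents, d_in) per-agent feature matrix
    H:     (n_agents, n_edges) incidence matrix; H[v, e] = 1 if agent v
           belongs to hyperedge e (e.g. a cooperating group of AeBSs)
    Theta: (d_in, d_out) learnable weight matrix
    """
    Dv = np.diag(H.sum(axis=1))                # node (agent) degrees
    De = np.diag(H.sum(axis=0))                # hyperedge degrees
    Dv_inv_sqrt = np.linalg.inv(np.sqrt(Dv))
    De_inv = np.linalg.inv(De)
    # Aggregate features across shared hyperedges, then apply Theta
    return Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt @ X @ Theta

# 3 agents, 2 hyperedges: agents {0, 1} cooperate, and agents {1, 2} cooperate
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
X = np.eye(3)                                  # toy per-agent features
out = hypergraph_conv(X, H, np.eye(3))
```

Because hyperedges can connect more than two agents at once, this propagation lets each AeBS mix information from whole cooperating groups rather than only pairwise neighbors, which is the cooperative-relationship strengthening the abstract refers to.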