【Abstract】: In order to solve the problems of poor portability, complex implementation, and low efficiency in traditional parameter training of the belief rule base, an artificial bee colony algorithm combined with Gaussian disturbance optimization was introduced, and a novel belief-rule-base parameter training method was proposed. In light of the algorithmic principle of the artificial bee colony, the bee colony search formula and the out-of-bounds handling method were improved, and a Gaussian disturbance was employed to prevent the search from falling into a local optimum. The parameter training was implemented in combination with the constraint conditions of the belief rule base. By fitting a multi-peak function and through a leakage detection experiment on oil pipelines, the experimental errors were compared with those of traditional and existing parameter training methods to verify the effectiveness of the proposed approach.
【Affiliation】: College of Mathematics and Computer Science, Fuzhou University
【Source】: The 6th CCF (China Computer Federation) Academic Conference on Big Data
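The abstract above describes the overall scheme but not the concrete update rules. The following is a minimal illustrative sketch in Python, assuming a standard artificial-bee-colony neighbour search, simple clipping as the out-of-bounds handling, and a Gaussian disturbance applied to stagnant food sources; the objective is a multi-peak curve-fitting error as a stand-in for the belief-rule-base inference error, and the BRB-specific constraints (e.g., belief degrees summing to one) are not modeled here. All parameter values are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def multipeak(theta, x):
    # Two-peak curve parameterised by [a1, m1, s1, a2, m2, s2].
    a1, m1, s1, a2, m2, s2 = theta
    return a1 * np.exp(-((x - m1) / s1) ** 2) + a2 * np.exp(-((x - m2) / s2) ** 2)

def objective(theta, x, y):
    # Stand-in for the BRB inference error: mean squared fitting error.
    return np.mean((multipeak(theta, x) - y) ** 2)

def clip_to_bounds(v, lo, hi):
    # Out-of-bounds handling sketched as plain clipping to the box bounds.
    return np.minimum(np.maximum(v, lo), hi)

def abc_gaussian(x, y, lo, hi, n_food=20, iters=300, limit=15, sigma=0.05):
    dim = lo.size
    food = rng.uniform(lo, hi, size=(n_food, dim))   # food sources = candidate parameter vectors
    fit = np.array([objective(f, x, y) for f in food])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        for i in range(n_food):
            k = rng.integers(n_food - 1)
            k = k + (k >= i)                          # random partner, k != i
            d = rng.integers(dim)
            v = food[i].copy()
            v[d] += rng.uniform(-1.0, 1.0) * (food[i, d] - food[k, d])  # ABC neighbour search
            v = clip_to_bounds(v, lo, hi)
            fv = objective(v, x, y)
            if fv < fit[i]:                           # greedy selection
                food[i], fit[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        # Gaussian disturbance for stagnant sources (instead of a plain scout
        # reset), to help the search escape local optima.
        for i in np.where(trials > limit)[0]:
            v = clip_to_bounds(food[i] + rng.normal(0.0, sigma, dim) * (hi - lo), lo, hi)
            food[i], fit[i], trials[i] = v, objective(v, x, y), 0
    best = int(np.argmin(fit))
    return food[best], fit[best]

if __name__ == "__main__":
    xs = np.linspace(-3.0, 3.0, 200)
    ys = multipeak(np.array([1.0, -1.0, 0.5, 0.8, 1.2, 0.4]), xs)  # synthetic two-peak target
    lo = np.array([0.0, -3.0, 0.1, 0.0, -3.0, 0.1])
    hi = np.array([2.0, 3.0, 2.0, 2.0, 3.0, 2.0])
    theta, err = abc_gaussian(xs, ys, lo, hi)
    print("fitted parameters:", np.round(theta, 3), " MSE:", err)
```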
Other Literature
Sleep staging has attracted significant attention as a critical step in the auxiliary diagnosis of sleep disease. To avoid subjectivity in the process of doctors' manual sleep staging, and to realize scienti…
Many software projects use bug tracking systems to collect and allocate bug reports, but priority assignment tasks become difficult to complete because of the increasing number of bug reports. In orde…
Network embedding is an important task that represents a high-dimensional network in a low-dimensional vector space, aiming to capture and preserve the network structure. Most existing network em…
In supervised learning, label noise has a considerable impact on model building. Current approaches to handling label noise are mainly filtering methods based on model predictions and robust modeling methods, but these methods either filter poorly or filter inefficiently. To address this problem, this paper proposes a label-noise filtering method based on the data distribution. First, for each sample in the dataset, according to the distribution of the samples within its neighborhood, the region formed by the sample and its neighbors is divided into a high-density region or a low-density region; different noise-filtering rules are then applied to the different regions. Compared with existing methods, this paper…
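The snippet above is cut off, so only the general idea is visible: estimate the local density around each sample from its neighbourhood and apply different filtering rules to high- and low-density regions. Below is a small hypothetical sketch of that idea using k-nearest-neighbour distances and simple agreement thresholds; the thresholds, the density split, and the voting rules are assumptions, not the paper's.

```python
import numpy as np

def density_based_noise_filter(X, y, k=10, density_quantile=0.3):
    """Toy label-noise filter: split samples into high/low local-density
    regions (via mean k-NN distance) and apply a different neighbour
    agreement rule in each region. Purely illustrative."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nn = np.argsort(dist, axis=1)[:, :k]                 # k nearest neighbours
    mean_dist = np.take_along_axis(dist, nn, axis=1).mean(axis=1)
    density = 1.0 / (mean_dist + 1e-12)
    low_density = density < np.quantile(density, density_quantile)

    keep = np.ones(n, dtype=bool)
    for i in range(n):
        agree = np.mean(y[nn[i]] == y[i])                # neighbour label agreement
        # Stricter rule in low-density regions, a plain majority elsewhere.
        keep[i] = agree >= (0.8 if low_density[i] else 0.5)
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    flipped = rng.choice(200, 20, replace=False)         # inject 10% label noise
    y_noisy = y.copy()
    y_noisy[flipped] = 1 - y_noisy[flipped]
    keep = density_based_noise_filter(X, y_noisy)
    print("flagged as noisy:", int(np.sum(~keep)),
          "| truly flipped among flagged:", int(np.sum(~keep[flipped])))
```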
Most of the data generated, collected, and processed by industrial equipment are unstructured data such as time series, spatial sequences, and high-dimensional matrices. Single-machine analysis environments such as R and Matlab provide rich, high-quality algorithm libraries, but as the speed and scale of data generation keep growing, these tools become inefficient or even fail when handling large-scale sequence and matrix computations. To address the problems of processable data scale and algorithm portability, this paper designs a large-scale time series analysis framework, LTSAF (Large-scale Time Series Analysis Frame…
Differential evolution (DE) is a simple and effective population-based global optimization method that has received extensive attention in multi-objective optimization. This paper proposes a multi-objective differential evolution algorithm based on max-min correlation density (MODEMCD). The new algorithm defines the max-min correlation density and, while strictly obeying the Pareto dominance rules, provides an external-archive maintenance method based on the max-min correlation density, thereby avoiding or reducing the loss of diversity in the final solution set. In addition, an adaptive selection strategy is designed, which guides the selection of better and worse individuals by evaluating their correlation density…
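The MODEMCD components named above (max-min correlation density, the archive maintenance method, the adaptive selection strategy) are that paper's own constructions and are not reproduced here. The sketch below only illustrates the generic backbone such a method builds on: a DE/rand/1/bin generation plus an external archive that keeps non-dominated solutions and, when over capacity, discards the most crowded ones; the crowding measure is an ordinary crowding-distance stand-in, not the max-min correlation density.

```python
import numpy as np

rng = np.random.default_rng(2)

def objectives(x):
    # Two-objective toy problem with a simple trade-off (ZDT1-like).
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])
    return np.array([f1, g * (1.0 - np.sqrt(f1 / g))])

def dominates(a, b):
    return bool(np.all(a <= b) and np.any(a < b))

def de_generation(pop, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation over a population in [0, 1]^d."""
    n, d = pop.shape
    trial = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), 0.0, 1.0)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                    # guarantee one crossed gene
        trial[i] = np.where(cross, mutant, pop[i])
    return trial

def update_archive(archive, candidates, max_size=50):
    """Keep only non-dominated solutions; if over capacity, drop the most
    crowded ones (a crude crowding proxy stands in for the paper's measure)."""
    merged = list(archive) + list(candidates)
    objs = [objectives(x) for x in merged]
    nondom = [x for i, x in enumerate(merged)
              if not any(dominates(objs[j], objs[i])
                         for j in range(len(merged)) if j != i)]
    if len(nondom) <= max_size:
        return nondom
    f = np.array([objectives(x) for x in nondom])
    order = np.argsort(f[:, 0])
    crowd = np.zeros(len(nondom))
    crowd[order[0]] = crowd[order[-1]] = np.inf          # always keep boundary points
    span = f[:, 0].max() - f[:, 0].min() + 1e-12
    crowd[order[1:-1]] = (f[order[2:], 0] - f[order[:-2], 0]) / span
    keep = np.argsort(-crowd)[:max_size]
    return [nondom[i] for i in keep]

if __name__ == "__main__":
    pop = rng.random((40, 5))
    archive = []
    for _ in range(50):
        trial = de_generation(pop)
        archive = update_archive(archive, trial)
        # Simple replacement: a trial vector replaces its parent if it dominates it.
        for i in range(len(pop)):
            if dominates(objectives(trial[i]), objectives(pop[i])):
                pop[i] = trial[i]
    print("external archive size:", len(archive))
```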
Approximations based on random Fourier features have recently emerged as an efficient and elegant methodology for designing large-scale machine learning tasks. Unlike approaches used by the Nyström me…
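As a brief, hypothetical illustration of the methodology named here (not taken from that paper), the sketch below approximates an RBF kernel with random Fourier features, so that an explicit low-dimensional feature map replaces the full kernel matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

def rff_features(X, n_features=500, sigma=1.0):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)      # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

if __name__ == "__main__":
    X = rng.normal(size=(200, 5))
    Z = rff_features(X)
    approx = Z @ Z.T                                         # ≈ kernel matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    exact = np.exp(-sq / 2.0)                                # sigma = 1
    print("max abs error of the approximation:", np.abs(approx - exact).max())
```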
Fault diagnosis techniques based on probabilistic graphical models are often used for reasoning about uncertain information. Among them, the Bayesian network, an effective tool which has strong characteristics of…
Taking the content of Douban short reviews as the analysis sample, user preferences are mined from online review data to explore an evaluation method for the weight of each dimension in research on Chinese animation brand-personality dimensions, so as to help Chinese animation enterprises discover deficiencies in the construction of their brand-personality dimensions. First, the previously established Chinese local brand-personality dimension model of "Ren (benevolence), Zhi (wisdom), Yong (courage), Le (joy), Ya (elegance)" is taken as the research basis, and the basic feature words are expanded using the Tongyici Cilin synonym thesaurus. Next, the samples are preprocessed, the word frequencies of the feature words corresponding to each dimension are counted and normalized, and the entropy weight method is then used to calculate each brand-personality dimension's…
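The entropy weight method mentioned above is a standard formula (column-wise proportions, Shannon entropy per dimension, weights proportional to one minus the entropy). A small illustration with made-up frequency counts follows; the five dimension names come from the model cited in the snippet, while the numbers are purely hypothetical.

```python
import numpy as np

def entropy_weights(freq):
    """Entropy weight method on a (samples x dimensions) frequency matrix:
    p_ij = x_ij / sum_i x_ij, e_j = -(1/ln m) * sum_i p_ij ln p_ij,
    w_j = (1 - e_j) / sum_j (1 - e_j)."""
    m, _ = freq.shape
    p = freq / (freq.sum(axis=0, keepdims=True) + 1e-12)     # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)          # 0 * log(0) treated as 0
    e = -plogp.sum(axis=0) / np.log(m)                       # entropy per dimension
    d = 1.0 - e                                              # degree of divergence
    return d / d.sum()

if __name__ == "__main__":
    # Hypothetical word-frequency counts for the five dimensions
    # "Ren, Zhi, Yong, Le, Ya" over four batches of short reviews.
    freq = np.array([
        [12, 30, 5, 40, 8],
        [10, 25, 7, 35, 9],
        [15, 28, 4, 50, 6],
        [11, 32, 6, 45, 7],
    ], dtype=float)
    for name, w in zip(["Ren", "Zhi", "Yong", "Le", "Ya"], entropy_weights(freq)):
        print(f"{name}: {w:.3f}")
```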
Cyberspace contains a wide variety of situational elements, element attributes, and intricate relations among the elements. Whether this information can be analyzed and described clearly and accurately directly determines the accuracy, completeness, and effectiveness of the resulting cyberspace visualization model. This paper uses knowledge representation methods to describe the key situational information elements in cyberspace, and the main research content covers the following three aspects. First, the characteristics of cyberspace situational information knowledge are analyzed, and the important role of knowledge representation for cyberspace situational information is set out. Second, ontology-based knowledge representation theory is studied, and the use of ontology-based representation…