
A knowledge transfer method for human-robot collaborative disassembly of end-of-life power batteries based on augmented reality

Published online by Cambridge University Press:  13 September 2024

Jie Li
Affiliation:
College of Mechanical Engineering, Donghua University, Shanghai, China
Liangliang Duan*
Affiliation:
College of Mechanical Engineering, Donghua University, Shanghai, China
Weibin Qu
Affiliation:
College of Mechanical Engineering, Donghua University, Shanghai, China
Hangbin Zheng
Affiliation:
College of Mechanical Engineering, Donghua University, Shanghai, China
*
Corresponding author: Liangliang Duan; Email: qingxixiliu@163.com

Abstract

The disassembly of power batteries poses significant challenges due to their complex sources, diverse types, variations in design and manufacturing processes, and diverse service conditions. Human memory capacity and robot cognitive and understanding capabilities are limited when faced with different dismantling tasks for end-of-life power batteries. Insufficient human-computer interaction capabilities greatly hinder the efficiency of human-robot collaboration (HRC) operations. Existing HRC relies heavily on operator experience, and existing disassembly systems fail to update disassembly strategies in real time when facing new battery varieties. Therefore, this paper proposes an augmented reality-assisted human-robot collaboration (AR-HRC) power battery dismantling system based on transfer learning. It consists of three modules: AR-HRC knowledge modeling, dismantling subgraph similarity assessment, and strategy transfer update. The AR-HRC knowledge modeling module aims to establish an intelligent mapping from tasks to collaborative strategies based on part features. Based on the evaluation of task similarity, the transferability assessment model divides subtasks into similar and dissimilar classes. For similar subtasks, the original dismantling strategy can be applied to the current task. For dissimilar subtasks, operators can issue instructions to the AR-HRC system through the human-computer interaction function of AR and develop new collaborative strategies based on actual conditions. Finally, a case study of power battery dismantling is conducted, and the results show that compared to traditional pre-programmed disassembly, this system can improve dismantling efficiency and reduce cognitive burden.

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Introduction

The improper recycling and disposal of heavy metals and organic electrolytes present in end-of-life power batteries pose a serious threat to the environment. There is an urgent need to investigate intelligent disassembly technologies to achieve large-scale and standardized management of automotive power battery recycling. These technologies aim to enhance the flexibility, reliability, and efficiency of power battery disassembly operations (Yu et al., Reference Yu, Zhang, Jiang, Yan, Wang and Zhou2022).

End-of-life power battery products exhibit various types, lengthy manufacturing processes, and complex operational conditions. The recycling and disassembly of these batteries need to be adaptable to the characteristics of multiple varieties, diverse end-of-life conditions, and inconsistent recycling volumes. This places significant pressure on the disassembly and recycling industry. Additionally, designing efficient and automated disassembly lines presents challenges due to different batches of end-of-life power batteries and the lack of historical disassembly experience and information regarding diverse component connections.

The human-robot collaboration (HRC) manufacturing model has gained significant attention as a popular research topic in recent years. It serves as a complex manufacturing system that integrates human intelligence with robotic automation capabilities, offering extensive application prospects in the assembly of complex products. However, the restricted cognitive and understanding abilities of robots for diverse disassembly tasks, coupled with inadequate human-computer interaction capabilities, significantly impede the efficiency of HRC operations. Consequently, it is imperative to tackle the challenges related to representing and computing heterogeneous information data in intricate disassembly processes. This requires conducting thorough research on self-awareness techniques and their associated mechanisms for collaborative robots, as well as exploring more efficient methods of human-computer interaction and understanding. By doing so, the goal of achieving HRC in complex disassembly scenarios can be accomplished. Currently, the majority of power battery recycling and disassembly enterprises still rely on manual or semi-automated disassembly methods, resulting in challenges such as non-standard disassembly practices, low levels of automation, and compromised safety measures. Through the progressive promotion and implementation of robotics technology, its integration into the disassembly of end-of-life power batteries will yield substantial benefits, including the enhancement of the working environment for workers, improved disassembly efficiency, and optimized material recovery outcomes.

The efficient and safe disassembly of retired power batteries, along with the subsequent classification and treatment of battery enclosures, screws, cables, battery cells, and related sensors, plays a crucial role in facilitating material recycling and batch reuse of end-of-life power batteries. Furthermore, the energy utilization efficiency achieved through this method would be superior to that of conventional complete disassembly and recycling approaches. Thus, investigating efficient disassembly approaches for power batteries with various types and diverse service conditions using intelligent disassembly technologies within the context of HRC would enhance the flexibility and adaptability of disassembly operations.

In summary, HRC has been applied in many fields, but there is relatively little research in the field of disassembly and recycling. Disassembly is an experiential task, and disassembly efficiency is closely related to the experience of operators. Previous research has shown that operators require a long training period. Retired power batteries have a large inventory, multiple models, and varying retirement states, so large-scale disassembly and recycling must meet flexibility requirements. To address the issues discussed above, this paper proposes a knowledge transfer (KT)-based augmented reality-assisted human-robot collaboration (AR-HRC) power battery disassembly approach. The approach consists of three parts: development of the knowledge graph, disassembly subgraph similarity matching, and KT updating. The proposed approach contributes in the following ways. First, it introduces a KT-based AR-HRC system that caters to the flexible disassembly requirements of power batteries with various varieties and retirement conditions. Second, it proposes a KT-based AR-HRC disassembly sequence planning method that generates disassembly sequence KGs by modeling and separately considering static and dynamic data. Third, a similarity matching method based on disassembly subgraphs is developed that encodes dynamic data, combines it with static data, and comprehensively assesses procedure similarity. Finally, an AR-based KT updating approach is designed that incorporates the operator's decisions with existing strategies through the real-time interactive functionality of AR, resulting in the generation of new disassembly strategies based on the actual disassembly situation.

Literature review

Traditional HRC

HRC refers to the cooperative work between humans and robots, where they collaborate on specific tasks to achieve particular objectives. Recent advancements in modern technology, particularly in artificial intelligence and machine learning, have been reshaping the landscape of HRC. However, the majority of HRC tasks are currently performed by humans. Some research efforts have attempted to use structured programming to control robots for automated disassembly, but these approaches often overlook the highly dynamic disassembly environment and the uncertainties related to the historical service conditions of power batteries. Consequently, scaling up and applying such solutions on a wide scale becomes challenging (Parsa and Saadat, Reference Parsa and Saadat2021). To address these challenges, Lv et al. (Reference Lv, Zhang, Sun, Lu and Bao2021) proposed a novel framework for HRC assembly based on the digital twin technique, which enhances the efficiency and safety of power battery assembly. Additionally, Jiang et al. (Reference Jiang, Hsu and Zhu2022) constructed a structured digital twin model from data obtained from human-computer interactions, providing an effective tool for physical simulation and control. These research endeavors signify significant progress in the field of HRC and lay the foundation for further advancements in automated disassembly and other collaborative tasks.

Robot task planning has long been a prominent area of research in the field of robotics. It involves selecting a sequence of actions and constraint conditions that enable a robot to change its state and manipulate objects from an initial state to a desired target state. These planning methods typically leverage artificial intelligence techniques and are customized and extended to address different task definitions and achieve diverse solutions. For instance, in complex multi-stage tasks, a hierarchical planning approach is adopted to break down the overall robot task into simpler tasks (Wöhlke, Reference Wöhlke1992). Researchers have also explored motion planning and decision-making in uncertain situations, such as real-time online planning and decision-making of intelligent robot behaviors in cluttered environments (Zeng et al., Reference Zeng, Song, Welker, Lee, Rodriguez and Funkhouser2018). However, despite the significant progress made in this area, there exists a common limitation: planners often require the representation of system state information in computable symbolic form, which necessitates pre-definition by experts in symbolic logic. This limitation greatly restricts the applicability of task-planning approaches. Additionally, many of these research studies are still confined to laboratory settings and have yet to be extensively deployed in engineering applications.

Traditional HRC approaches rely on separate models for designing HRC strategies and robot motion path planning. This lack of correlation between different HRC tasks poses challenges for adapting HRC systems to dynamic disassembly scenarios. Furthermore, HRC is often limited to a single mode, which hinders the flexibility required for disassembling different product varieties under diverse retirement conditions.

AR assisted HRC

Augmented reality (AR) is a technology that combines computer-generated virtual objects with real-world scenes, seamlessly integrating virtual elements into the user’s perception of the environment. This is achieved using cameras and computer graphics algorithms to overlay virtual objects onto real-world images or videos, creating an immersive and interactive experience for users (Cheng et al., Reference Cheng, Zhang, Bo, Chen and Zhang2020). In the context of manufacturing systems, AR-based software can be employed to enhance the interaction between mobile robots and operators. By connecting unmarked AR tools to robot controllers, natural human-computer interaction can be facilitated (Kousi et al., Reference Kousi, Stoubos, Gkournelos, Michalos and Makris2019).

Fang et al. (Reference Fang, Fan, Ji, Han, Xu, Zheng and Wang2022) proposed a distributed cognition-based AR-assisted collaborative assembly positioning method to address the challenge of sharing augmented assembly instructions among multiple operators. To tackle the problem of mismatched pins in complex aviation connectors, Li et al. (Reference Li, Zheng and Zheng2020) combined AR with deep learning techniques to provide flexible and mobile automatic inspection services. Cheng et al. (Reference Cheng, Zhang, Bo, Chen and Zhang2020) proposed an AR dynamic image recognition technique based on deep learning algorithms for identifying dynamic images. Jia and Liu (Reference Jia and Liu2020) developed an AR-based collision detection system for simulating real glass collision detection, aiming to reduce experimental costs and risks associated with physical experiments. Palmarini et al. (Reference Palmarini, del Amo, Bertolino, Dini, Erkoyuncu, Roy and Farnsworth2018) conducted experiments demonstrating that users have higher levels of trust in robots when using an AR interface compared to traditional graphical user interfaces (GUI). The AR interface provided real-time feedback and visual cues, enhancing the user's perception of robot actions and intentions. AR systems can offer real-time feedback and guidance to operators during collaborative assembly tasks with robots. Information about the assembly process can be overlaid into the operator's field of view using a head-mounted display, and communication between the operator and the robot can be facilitated through a voice command interface (Green et al., Reference Green, Chase, Chen and Billinghurst2010; Liu and Wang, Reference Liu and Wang2017). Chan et al. (Reference Chan, Hanks, Sakr, Zhang, Zuo, Van der Loos and Croft2022) employed an AR interface to set virtual robot motion trajectories and rendered virtual models of robots and workpieces onto real ones, providing visual indications of position calibration results and task context.

Traditional AR-HRC methods in the past have relied mainly on static 3D models, which involve combining virtual information with the real world to assist users in completing tasks, particularly in assembly scenarios. However, these traditional approaches encounter challenges in effectively leveraging prior knowledge and adapting to dynamic disassembly tasks in the presence of various product varieties and diverse retirement conditions.

HRC based on knowledge transfer

In recent years, there has been a growing focus on cognitive computing and cognitive intelligence in the field of HRC, with notable advancements. Cognitive processes can be divided into two main stages: perception and cognition. Through perceiving data and the environment in HRC scenarios, cognitive abilities such as learning, reasoning, and problem planning are continuously acquired, making this a forefront area of research in HRC.

Li et al. (Reference Li, Gu, Dullien, Vinyals and Kohli2019) proposed a method that utilizes graph-matching networks to learn the similarity between graph-structured objects. This approach incorporates an attention mechanism to encode two graph structures into embeddings, which are then fed into an attention network to calculate the similarity score between them. Qu et al. proposed an adaptive planning method based on the digital twin model for HRC disassembly of end-of-life lithium-ion batteries. This method considers the state and health of the battery, as well as the efficiency and safety of HRC. By considering the characteristics of the battery and the components that require disassembly, this approach can automatically adjust the disassembly strategy and collaboration method to optimize disassembly efficiency and reduce risks (Qu et al., Reference Qu, Li, Zhang, Liu and Bao2023).

The application of knowledge graphs (KG) has been prominent in the field of robot assembly and disassembly. Utilizing robot knowledge and knowledge-based task specifications in robot systems, as well as knowledge-based disassembly methods, can enhance the analysis of product disassembly and maintainability (Stenmark and Malec, Reference Stenmark and Malec2015). Cognitive robot agents equipped with disassembly-related knowledge can further enhance the automation level of the automatic disassembly process (Vongbunyong et al., Reference Vongbunyong, Kara and Pagnucco2015; Favi et al., Reference Favi, Germani, Mandolini and Marconi2016). Similarly, developing knowledge-based systems that incorporate specific knowledge and skills can effectively improve disassembly efficiency (Lie et al., Reference Lie, Aziz, Wahab, Rahman and Azhari2018). In addition, KG sub-technology has also found applications in this domain. For example, ontology-based dynamic modeling methods are used for knowledge modeling applications, enabling the reflection of the current state and dynamic capabilities of industrial robots in the disassembly process, as well as mining association relationships from disassembly process data (Zheng et al., Reference Zheng, Xu, Zhou, Pham, Qu and Zhou2017). In terms of knowledge reasoning application, ontology-based and case-based reasoning methods have facilitated fully automated and cost-saving disassembly decision-making processes for products (Sylla et al., Reference Sylla, Guillon, Vareilles, Aldanondo, Coudert and Geneste2018). The emerging field of collaborative assembly between humans and robots has garnered significant attention in industrial robotics. In the context of HRC, semantic knowledge-based reasoning frameworks can assist intelligent devices in deriving the intention of human-computer interaction during the teaching process (Akkaladevi et al., Reference Akkaladevi, Plasch, Hofmann and Pichler2021).

The lack of updated awareness and technology in disassembly waste products has hindered the ability to design efficient disassembly operation strategies for retired power batteries, taking into account their types, service conditions, and other relevant factors. In various industries, product similarity is often observed in aspects such as composition structure, operational data, and functional characteristics, among others. The attributes of products play a crucial role in determining their assembly and disassembly processes. Therefore, investigating the mechanism of knowledge reuse between similar products is highly significant in enhancing the efficiency of disassembly strategy design. By leveraging existing knowledge and experience from similar products, it becomes possible to optimize and streamline the disassembly processes, leading to improved efficiency and effectiveness.

In the field of intelligent manufacturing, there is a growing desire to enable production machines to achieve automation and even intelligence through the application of intelligent algorithms (Schoettler et al., Reference Schoettler, Nair, Ojea, Levine and Solowjow2020). One such algorithm, deep reinforcement learning, is capable of strengthening itself by obtaining feedback through continuous interaction with the environment. This mechanism allows it to establish a self-learning and self-evolving process within the manufacturing domain (Tan et al., Reference Tan, Feng and Ong2010; Wang et al., Reference Wang, Ma, Zhang, Gao and Wu2018). The advantages of deep reinforcement learning have led to its widespread application in various manufacturing scenarios, with ongoing improvements in its practical implementation (Ding et al., Reference Ding, Ding, Wei and Han2019). For instance, the research team at the State Key Laboratory of Digital Manufacturing Equipment and Technology of Huazhong University of Science and Technology has leveraged the maximum entropy reinforcement learning framework in the context of shaft hole assembly. This approach enables the learning of effective assembly strategies, with the added benefit of skill transferability between tasks, making it applicable to real-world assembly scenarios while minimizing the need for interaction in the physical environment (Arana-Arexolaleiba et al., Reference Arana-Arexolaleiba, Urrestilla-Anguiozar, Chrysostomou and Bøgh2019). Lv et al. (Reference Lv, Zhang, Liu, Zheng, Jiang, Li, Bao and Xiao2022) proposed a strategy transfer approach for intelligent HRC assembly. In this framework, the participating robots are regarded as agents utilizing reinforcement learning, and the approach incorporates transfer learning principles. Reinforcement learning has significantly enhanced the learning ability and intelligence level of robots, particularly in acquiring control strategies in fixed scenes. However, the generalization ability of the reinforcement learning model is often limited, resulting in poor stability and usability in dynamic operational environments.

Previous research on HRC based on KT has primarily focused on transferring operational strategies across different product varieties. These approaches typically involve modeling static data using natural language processing models and transferring similar processes from a source domain to a target domain. However, when dealing with end-of-life power batteries of different varieties and retirement conditions, obtaining information solely from previous static data is insufficient. Real-time scene perception is necessary to acquire the retirement status of power batteries in actual disassembly scenarios.

AR can address these challenges by reducing the cognitive burden on operators and providing real-time information on the disassembly status and retirement conditions of power batteries through scene perception. AR leverages its convenient and efficient human-computer interaction capabilities to develop and learn new disassembly methods in real time, as well as update knowledge. During subsequent disassembly processes, when encountering tasks with similar retirement conditions, existing disassembly strategies can be efficiently applied through KT to perform effective disassembly operations.

Research method

Framework of AR-HRC system based on KT

The framework of the proposed KT-based AR-HRC system is depicted in Figure 1. The system utilizes scene perception to capture the actual disassembly scenario, which includes the operator’s skeleton key point positions, action information, tools used, and the disassembly status of the power battery. These captured elements are then utilized to construct the corresponding scene graph. Subsequently, the disassembly subgraph is extracted from the scene graph. The feature vectors of the constructed subgraph are compared with the corresponding process subgraphs in the existing disassembly KG using similarity measurement. If the similarity score exceeds a predefined threshold, it indicates the existence of corresponding disassembly methods in the existing knowledge. Consequently, the system can retrieve and present the disassembly information related to the corresponding process for AR-HRC. Conversely, if the similarity score falls below the threshold, it signifies that no applicable methods are present in the DT-based data manager. In such situations, the operator is required to issue instructions to the robot through AR, devise new disassembly strategies based on the actual circumstances, and save the new strategies.

Figure 1. Framework diagram of AR-HRC system based on KT.
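To make the decision logic of Figure 1 concrete, the following minimal Python sketch expresses the loop in outline form. The object interfaces (perceive, build_subgraph, most_similar, retrieve_strategy, author_strategy, save_strategy) and the threshold value are illustrative assumptions, not the implemented system.

```python
# Minimal sketch of the KT-based AR-HRC decision loop (interfaces are illustrative).

SIMILARITY_THRESHOLD = 0.7  # assumed value; the paper only states that a threshold is predefined


def dismantling_cycle(scene_sensor, knowledge_graph, ar_interface, robot):
    # 1. Perceive the actual disassembly scene (operator skeleton, tools, battery state).
    scene_graph = scene_sensor.perceive()

    # 2. Extract the disassembly subgraph for the current task.
    task_subgraph = knowledge_graph.build_subgraph(scene_graph)

    # 3. Compare against stored process subgraphs in the disassembly KG.
    best_case, score = knowledge_graph.most_similar(task_subgraph)

    if score >= SIMILARITY_THRESHOLD:
        # Similar process exists: reuse the stored collaborative strategy.
        strategy = knowledge_graph.retrieve_strategy(best_case)
        ar_interface.display(strategy)
        robot.execute(strategy.robot_actions)
    else:
        # No applicable method: the operator authors a new strategy through AR,
        # which is then saved back into the knowledge repository.
        strategy = ar_interface.author_strategy(task_subgraph, robot)
        knowledge_graph.save_strategy(task_subgraph, strategy)
    return strategy
```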

AR-HRC knowledge modeling

The AR-HRC system based on KT encompasses both static and dynamic data in the modeling phase. Static data modeling involves information that can be obtained before the actual disassembly process commences, whereas dynamic data refers to real-time information acquired during the disassembly process. To handle static data described in natural-language text, such as process flow cards and rulebooks, natural language processing techniques are employed: the entities, their attributes, and the relationships hidden within the text are automatically extracted. The extracted information is then stored in a graph database in the form of RDF triplets. For dynamic data acquired in real time through scene perception, such as the power battery disassembly status and the robot running status, a semantic mapping mechanism in the open-source D2R software is utilized to convert the real-time data into RDF triples. Complex relationships between entities or entity attributes in relational databases are expressed as simple binary relations. The specific techniques used are shown in Figure 2.

Figure 2. Task data and environmental data acquisition for HRC.
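As a concrete illustration of the triplet storage described above, the fragment below uses the Python rdflib package to record a handful of facts as RDF triples. The namespace, entity names, and property names are invented for the example and are not taken from the paper.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical namespace for the disassembly process knowledge graph.
DPKG = Namespace("http://example.org/dpkg#")

g = Graph()
g.bind("dpkg", DPKG)

# Static facts extracted from process flow cards (entities, attributes, relations).
g.add((DPKG.BatteryPackS471, RDF.type, DPKG.PowerBattery))
g.add((DPKG.ModuleA1, RDF.type, DPKG.Component))
g.add((DPKG.BatteryPackS471, DPKG.has, DPKG.ModuleA1))
g.add((DPKG.ScrewP1, DPKG.connects, DPKG.ModuleA1))
g.add((DPKG.Unscrew, DPKG.uses, DPKG.AllenWrench))

# Dynamic fact converted from a relational record (D2R-style mapping).
g.add((DPKG.ModuleA1, DPKG.damageStatus, Literal("intact")))

print(g.serialize(format="turtle"))
```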

Static data modeling

The disassembly process of HRC encompasses multi-dimensional data, including products, robots, and processes. To achieve a consistent and standardized representation of these heterogeneous data sources and establish a foundation for future disassembly knowledge discovery, the information modeling is performed using a knowledge graph. The KG model layer of the disassembly process is constructed to enable a knowledge-based description of the multidimensional data, such as products, robots, and processes, involved in the HRC disassembly process. The disassembly task pattern layer describes the division of multiple task units required to complete the disassembly operation. It reflects the procedural objectives and outlines the specific execution process for the disassembly of product components through a series of steps. Figure 3 illustrates how different disassembly operations can be carried out by humans, robots, or in collaboration. The symbols used in the figure correspond to the operator (H), robot (R), augmented reality (AR), power battery (B), component (A), part (P), feature (F), operation (O), tool (T), and function (Fun).

Figure 3. Disassembly process KG mode layer.

The disassembly process is presented in formula (1).

(1) $$ P=\left\{H\cup R\cup AR\cup B\right\} $$

where H represents the operations that the operator needs to perform in this process, R represents the operations that the working robot needs to perform in this process, AR represents the functions accomplished by AR, and B represents the changes in the disassembly status of the power battery. With the mode layer of the disassembly process KG, it becomes possible to capture the overall HRC disassembly process of the power battery, assisted by AR.

The mode layer of the disassembly process knowledge graph primarily focuses on describing the division of multiple task units and operations required to complete the disassembly operation. It provides insights into the flow and progression of the disassembly operation. The disassembly process knowledge graph (DPKG) mode layer is presented in formula (2).

(2) $$ DPKG=\left\{A\cup P\cup O\cup T\cup S\right\} $$

where A represents the component, P represents the connector, O represents the operation or disassembly method performed, T represents the tools required for disassembly, and S represents the semantic relationship between disassembly units. The details are presented in formula (3). It expresses the operations or functions that robots, humans, and ARs need to perform in a disassembly sequence of power batteries, as well as the connection relationships between the components of the dismantled power batteries in this process.

(3) $$ \begin{aligned} S={}&\sum_{i,j=1}^{n}\mathrm{Is}\left(A_i,P_j\right)\\ &\cup \sum_{i,j=1}^{n}\mathrm{Before}\left(A_i,A_j\right)\cup \sum_{i,j=1}^{n}\mathrm{After}\left(A_j,A_i\right)\cup \sum_{i,j=1}^{n}\mathrm{Parallel}\left(A_j,A_i\right)\\ &\cup \sum_{i,j=1}^{n}\mathrm{Before}\left(O_i,O_j\right)\cup \sum_{i,j=1}^{n}\mathrm{After}\left(O_j,O_i\right)\cup \sum_{i,j=1}^{n}\mathrm{Parallel}\left(O_j,O_i\right)\\ &\cup \sum_{i=1}^{n}\mathrm{Has}\left(B,A_i\right)\cup \sum_{i=1}^{n}\mathrm{Perform}\left(H,O_i\right)\cup \sum_{i=1}^{n}\mathrm{Perform}\left(R,O_i\right)\cup \sum_{i=1}^{n}\mathrm{Realize}\left(AR,\mathrm{Fun}_i\right)\\ &\cup \sum_{i,j=1}^{n}\mathrm{ConnectedBy}\left(A_i,P_j\right)\cup \sum_{i,j=1}^{n}\mathrm{Operated}\left(P_i,O_j\right)\cup \sum_{i,j=1}^{n}\mathrm{Use}\left(O_i,T_j\right)\end{aligned} $$
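To make the relation set S in formula (3) more tangible, the sketch below represents each semantic relation as a typed edge in a plain Python structure. The component and operation names are illustrative only.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Relation:
    relation_type: str  # e.g. "Is", "Before", "After", "Parallel", "Has", "ConnectedBy", "Operated", "Use", "Realize"
    head: str
    tail: str


# A small excerpt of S for one disassembly step (names are illustrative).
S = {
    Relation("Has", "Battery_B", "Component_A1"),
    Relation("ConnectedBy", "Component_A1", "Connector_P1"),
    Relation("Operated", "Connector_P1", "Operation_Unscrew"),
    Relation("Use", "Operation_Unscrew", "Tool_AllenWrench"),
    Relation("Before", "Operation_Unscrew", "Operation_LiftModule"),
    Relation("Realize", "AR", "Fun_VisualGuidance"),
}

# Precedence constraints between operations can then be read off directly.
precedence = [(r.head, r.tail) for r in S if r.relation_type == "Before"]
print(precedence)
```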

Dynamic data modeling

The end-of-life management of retired power batteries requires adaptation to diverse varieties, varying decommissioning conditions, and the need for mass disassembly and recycling. However, the traditional knowledge graph technique for disassembling retired power batteries cannot offer adequate guidance strategies because of limitations posed by factors such as service life and variety. As the disassembly data continue to be updated, the sequential dynamic KG emerges as a valuable resource for providing guidance schemes for novel disassembly tasks and enhancing work efficiency. Dynamic data can be obtained through scene perception, and the scene perception module is shown in Figure 4.

Figure 4. Scene perception graph.

The dynamic update of the KG is achieved through memory fusion, which involves matching and merging external knowledge with its own memory. A multi-level representation model is utilized to match, correlate, and fuse inherent memory and learning memory. Since static data lacks temporal characteristics, it is initialized as knowledge in inherent memory. In conjunction with real-time scene perception of the power battery to be disassembled, algorithm modules such as semantic similarity, semantic correlation, contextual relationships, and entity disambiguation are used to match the entities within the instance. This process facilitates memory fusion, leading to the generation of learning memories to be stored. Consequently, temporal relationships connect both the learning memory and the knowledge entities in the inherent memory. The update of the KG comprises two main layers: the pattern layer and the data layer. Updating the KG pattern layer, acting as a template, enables the automatic population of multi-level instance data based on the corresponding ontology pattern layer. During the update of the power battery disassembly KG, manual correction is supported by the integration of learning memory and inherent memory. Additionally, adjustments are made to the ontology model architecture and parameters of the KG to enhance its knowledge applicability. Refer to Figure 5 for a visual representation of these processes.

Figure 5. Dynamic update of KG of decommissioned power battery disassembly.
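A highly simplified view of the memory-fusion step is sketched below: perceived entities are matched against the inherent memory by string similarity (a naive stand-in for the semantic similarity, correlation, and disambiguation modules mentioned above), and new learning-memory records are time-stamped. All names, the matching rule, and the threshold are assumptions made for illustration.

```python
from datetime import datetime
from difflib import SequenceMatcher

# Inherent memory: entities initialized from static data (no temporal attributes).
inherent_memory = {"ModuleA1": {}, "ConnectorP1": {}, "CoverPlate": {}}

# Learning memory: entities produced by fusing real-time perception with inherent memory.
learning_memory = {}


def fuse_observation(label: str, attributes: dict, threshold: float = 0.8) -> str:
    """Match a perceived entity to inherent memory; otherwise create a learning-memory node."""
    best, best_score = None, 0.0
    for entity in inherent_memory:
        score = SequenceMatcher(None, label.lower(), entity.lower()).ratio()
        if score > best_score:
            best, best_score = entity, score

    timestamp = datetime.now().isoformat()
    if best_score >= threshold:
        # Fuse: attach the time-stamped observation to the matched inherent-memory entity.
        learning_memory.setdefault(best, []).append({"time": timestamp, **attributes})
        return best
    # No match: store the observation as a new learning-memory entity.
    learning_memory[label] = [{"time": timestamp, **attributes}]
    return label


fuse_observation("module A1", {"damage": "cover scratched"})
print(learning_memory)
```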

Figure 6 presents an illustrative example that compares the graph before and after an update. In this scenario, if scene perception detects that the P1 connector between A1 and A2 is missing, its entity disappears, together with its relationships with A1 and A2 and the associated operations. Simultaneously, the detection of damage to the A1 component allows its damage situation to be indicated through the corresponding attributes.

Figure 6. Comparison chart of atlas before and after updating.
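The update in Figure 6 can be expressed directly as graph operations. The snippet below uses networkx to drop the missing connector together with its incident relations and to flag the damaged component through an attribute; the node names follow the figure, everything else is illustrative.

```python
import networkx as nx

G = nx.MultiDiGraph()
G.add_node("A1", damaged=False)
G.add_node("A2", damaged=False)
G.add_edge("A1", "P1", relation="ConnectedBy")
G.add_edge("A2", "P1", relation="ConnectedBy")
G.add_edge("P1", "Unscrew_P1", relation="Operated")

# Scene perception reports that connector P1 is missing: deleting the node
# also deletes its relations with A1 and A2 and the edge to the associated operation.
G.remove_node("P1")

# Operations left without a connector become orphaned and are removed as well.
orphans = [n for n in G.nodes if G.degree(n) == 0 and n.startswith("Unscrew")]
G.remove_nodes_from(orphans)

# Scene perception detects damage on component A1: record it as an attribute.
G.nodes["A1"]["damaged"] = True

print(list(G.edges(data=True)), G.nodes["A1"])
```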

Similarity assessment of disassembly subtasks

Power batteries of different varieties and retirement conditions often exhibit similar disassembly structures and processes. The knowledge base stores local topological structures that share similar representations. Accumulated raw data allows for swift retrieval of solutions from the knowledge base. Disassembly process knowledge is stored in the KG as triplets, facilitating rapid identification of identical or similar disassembly processes through disassembly subgraph similarity. Figure 7 demonstrates the combination of dynamic data acquired during the actual disassembly process with the original static data, resulting in the generation of a disassembly process subgraph. This subgraph is then matched with existing disassembly process cases in the knowledge base. Graph-matching neural networks are utilized to assess the similarity between two subgraphs, determining the similarity of the corresponding disassembly processes.

Figure 7. Similarity calculation of cross-domain migration for HRC disassembly strategies.
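One way to realize the combination step in Figure 7 is to compose the static process graph with the dynamically perceived facts and then cut out the neighbourhood of the component currently being handled. The networkx sketch below assumes this representation and uses invented node names.

```python
import networkx as nx

# Static process knowledge for the current battery variety (illustrative nodes).
static_g = nx.Graph()
static_g.add_edges_from([("ModuleA1", "ScrewP1"), ("ScrewP1", "Unscrew"), ("Unscrew", "AllenWrench")])

# Dynamic facts from scene perception (retirement state, tool actually present).
dynamic_g = nx.Graph()
dynamic_g.add_edge("ModuleA1", "State_CoverDeformed")
dynamic_g.add_edge("Operator", "Holds_AllenWrench")

# Merge both sources, then extract the disassembly subgraph around the current target.
merged = nx.compose(static_g, dynamic_g)
task_subgraph = nx.ego_graph(merged, "ModuleA1", radius=2)
print(task_subgraph.nodes())
```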

Table 1 presents component node IDs along with their corresponding node triplets. The system utilizes cross-lingual KG alignment via a graph-matching neural network (GMNN) as its graph-matching model. To enhance matching efficiency, this model does not evaluate the equivalence of relationships between two subgraph nodes. Consequently, the relationship within a triplet can be substituted with an arbitrary number, which serves no semantic purpose other than denoting that a relationship exists between the two nodes.

Table 1. Partial component node IDs and corresponding node triplets
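Because the matcher ignores relation equivalence, the relation slot of each triplet can be filled with a placeholder integer that merely marks "some relation exists". The sketch below shows this encoding step together with a naive embedding-pooling similarity as a simplified stand-in for the GMNN score; it is not the authors' model, and all names and values are illustrative.

```python
import numpy as np


def encode_triplets(triplets, node_index):
    """Map (head, relation, tail) triplets to integer ID pairs; the relation label is discarded."""
    edges = []
    for head, _relation, tail in triplets:
        edges.append((node_index[head], 0, node_index[tail]))  # 0: placeholder "a relation exists"
    return edges


# Toy node embeddings standing in for learned node representations.
rng = np.random.default_rng(0)
node_index = {"ModuleA1": 0, "ScrewP1": 1, "Unscrew": 2}
embeddings = rng.normal(size=(len(node_index), 16))

triplets = [("ModuleA1", "ConnectedBy", "ScrewP1"), ("ScrewP1", "Operated", "Unscrew")]
edges = encode_triplets(triplets, node_index)


def graph_vector(edge_list):
    # Mean-pool the embeddings of all nodes touched by the subgraph.
    nodes = {h for h, _, t in edge_list} | {t for h, _, t in edge_list}
    return embeddings[list(nodes)].mean(axis=0)


def similarity(g1, g2):
    v1, v2 = graph_vector(g1), graph_vector(g2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))


print(similarity(edges, edges))  # 1.0 for identical subgraphs
```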

AR-HRC KT update

As illustrated in Figure 8, in the proposed framework, the process subgraph of the current disassembly product is compared with the corresponding process subgraphs in the existing disassembly KG using a graph-matching neural network to assess their similarity. The resulting similarity score is then evaluated against a threshold. If the similarity score exceeds the threshold, it signifies the presence of similar processes in the knowledge repository. Conversely, if the score falls below the threshold, it indicates the absence of similar processes.

Figure 8. AR-HRC KT update.

For processes that are similar to historical ones, corresponding disassembly strategies can be retrieved from the knowledge repository. However, for dissimilar tasks, a human-robot interactive self-learning approach is employed. In this approach, the human operator guides the robot through AR for the disassembly task, and the experience is stored in an experience pool. The scene perception module records the retirement status of the power battery, the disassembly states, and the human operations during the process. AR and the robot work together to acquire collaborative strategy knowledge, complementing and enhancing the information in the knowledge repository. The updated knowledge graph is adjusted through AR. Compared to the conventional approach of humans using robot control panels for strategy development and path planning, formulating strategies through AR-based HRC is more efficient. With AR, the human operator can interact with the robot's digital twin at any time and location by using the AR operation panel to develop strategies and plan paths, while the physical robot remains synchronized with its digital twin. Moreover, the results of scene perception can be visually displayed to users through AR, providing a more intuitive experience.

As illustrated in Figure 9, in situations where the internal hexagonal screw slips and cannot be loosened with an Allen wrench, and there is no existing strategy in the knowledge repository, the operators must devise a new strategy based on their experience. In this case, a drilling machine is required to grind the slipping screw into a slightly larger internal hexagon shape. Subsequently, a larger-sized Allen wrench is used to unscrew it. During the grinding process, the robot is responsible for securing the work piece connected to the screw. Specific instructions and fixation point coordinates are provided to the robot for executing the corresponding operation. The scene perception module identifies the three-dimensional positions of the various components, and AR provides a visual representation to operators, allowing them to intuitively determine the three-dimensional positions of the components. Through AR-HRC, operators can manipulate the virtual robot to reach the specified positions. Importantly, the physical robot maintains synchronized poses with the virtual robot throughout this process.

Figure 9. Example diagram of new strategy development.

Case study

Disassembly environment

The AR-HRC disassembly environment encompasses various elements, including multiple data sources, entities, and dynamic characteristics. It consists of a worker, a robot, AR glasses, depth cameras, power batteries, tools, and a workbench. Figure 10 showcases the AR-HRC disassembly environment, which leverages digital twins and can be divided into two sections: the virtual disassembly environment and the physical disassembly environment. The digital twin system integrates four primary modules: (1) the virtual simulation module: executes the current collaborative strategy and generates heterogeneous data related to the disassembly process. It simulates the actions and interactions of the virtual components involved in the disassembly task; (2) the data analysis and processing module: upon receiving the simulation data, this module analyses the transmission effect of the HRC strategy. It assesses how well the collaborative strategy is being executed and identifies areas for optimization. It transforms the optimized strategy into actual control commands that drive the AR-HRC system; (3) the AR-HRC physical control module: receives the control commands generated by the data processing module and transfers them to the digital twin of the robot. The real robot then follows the actions of the virtual robot, replicating the operations in the physical disassembly environment; and (4) the AR-HRC knowledge update module: as humans are assisted by AR and the robot, this module facilitates the acquisition of collaborative strategy knowledge. It enables the system to learn and accumulate relevant information to enhance and complement the existing knowledge base. Operators can utilize AR to adjust the updated graph, ensuring that the system remains up-to-date and aligned with the latest insights. The specific functionalities and visualization content provided by AR are depicted in Figure 11, including AR simulation, disassembly strategy updates, scene awareness, and AR disassembly guidance.

Figure 10. Disassembly environment.

Figure 11. AR visualization content.

Construction of disassembly diagram for power batteries

The data layer is the most fundamental level in a knowledge graph and includes actual data, facts, and entities. These data are usually organized in the form of graphs, where nodes represent entities and edges represent relationships between entities. The pattern layer is located above the data layer and defines the structure, semantics, and constraints of entities and relationships in the data layer. The pattern layer includes definitions of entity types, attributes, relationship types, and constraints. We define the relationships between humans, AR, robots, tools, and batteries through the knowledge graph pattern layer. In this section, the Protégé ontology modeling tool was used to construct the DPKG pattern layer, as depicted in Figure 12. The ontology model can be expressed in the RDF/XML language, which facilitates easy parsing by computer programs. Therefore, it serves as a suitable data template file for the KG data layer.

Figure 12. DPKG pattern layer.
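Since the pattern layer is exported in RDF/XML, it can be parsed programmatically and used as the template for instance population. A minimal sketch, assuming a hypothetical file name exported from Protégé:

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDF

schema = Graph()
# Hypothetical export of the DPKG pattern layer from Protégé (RDF/XML syntax).
schema.parse("dpkg_pattern_layer.owl", format="xml")

# List the entity classes defined in the pattern layer; these drive instance population.
classes = [str(c) for c in schema.subjects(RDF.type, OWL.Class)]
print(classes)
```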

Once the disassembly sequence KG schema layer was obtained, the structural disassembly information between components was extracted by integrating CAD models and disassembly operational manuals. Subsequently, the instance data layer was developed using neo4j, as illustrated in Figure 13. In the figure, the red nodes represent power battery models, the blue nodes represent component units, the light blue nodes represent part units, the dark blue nodes represent connector units, the pink nodes represent the disassembly operations corresponding to the current part, and the gray nodes represent the tools used.

Figure 13. DPKG data layer.
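The instance data layer can be populated through the neo4j Python driver with Cypher statements. The connection details, node labels, and example entities below are placeholders chosen to mirror the node types in Figure 13, not the actual database contents.

```python
from neo4j import GraphDatabase

# Placeholder connection details for a local neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

create_step = """
MERGE (b:Battery {model: $model})
MERGE (a:Component {name: $component})
MERGE (p:Connector {name: $connector})
MERGE (o:Operation {name: $operation})
MERGE (t:Tool {name: $tool})
MERGE (b)-[:HAS]->(a)
MERGE (a)-[:CONNECTED_BY]->(p)
MERGE (p)-[:OPERATED]->(o)
MERGE (o)-[:USE]->(t)
"""

with driver.session() as session:
    # Insert one illustrative disassembly step for battery model S471.
    session.run(create_step, model="S471", component="ModuleA1",
                connector="ScrewP1", operation="Unscrew", tool="AllenWrench")

driver.close()
```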

Disassembly subgraph similarity matching

This experiment used the S471 standard C package and the S472 standard G package as objects for the similarity matching experiments. The S471 package consisted of 137 nodes and 277 triplets, while the S472 package consisted of 145 nodes and 366 triplets. These datasets were used for training the model. The hardware resources utilized in the experiment included a GTX 1080Ti graphics card and 8 GB of memory. The hyperparameters of the model are shown in Table 2. As shown in Figure 14, the entity alignment loss reached a minimum value of 0.366, while the classification accuracy reached a maximum value of 0.724.

Table 2. Hyper-parameters

Figure 14. Entity alignment loss (left), classification accuracy (right).

Disassembly system comparison test

Comparative experiment on collaborative disassembly of slipping screws

In this experiment, the objective was to address a scenario where the mechanical arm needed to reach a specific position to fix a component when the screw slipped. The operator was assisted in grinding the screw into a larger size and then unscrewing it. A total of 10 individuals conducted 20 collaborative experiments each, and the results presented in Table 3 represent the average outcomes from all the experiments. The findings indicate that the AR-HRC method achieved a time of 18.34 s, which is shorter than the 23.48 s required by the Pre-programming method. This improvement can be attributed to the AR-HRC method’s ability to swiftly acquire accurate position information through AR, enabling the robot to navigate efficiently without unnecessary deviations. In contrast, pre-programming methods lack the capability to obtain real-time and precise position information through AR, leading to suboptimal robot paths. Moreover, the collision rate of the AR-HRC method was observed to be 0.2%, significantly lower than the 2.2% collision rate associated with Pre-programming. This reduction can be attributed to the AR-HRC method’s ability to detect the distance between the robot’s digital twin and key points on the operator’s skeleton. By maintaining a safe distance, the AR-HRC method proactively stops the robot and alerts the operator through AR in situations where the distance falls below the predefined safety threshold.

Table 3. The performance of collaborative disassembly with different methods
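The collision-avoidance behaviour described above amounts to a minimum-distance check between the robot's digital-twin link points and the operator's skeleton key points. The numpy sketch below illustrates that check; the 0.3 m threshold and the example point sets are assumptions, not values reported in the paper.

```python
import numpy as np

SAFETY_THRESHOLD_M = 0.3  # assumed safety distance; the paper only states a predefined threshold


def min_human_robot_distance(robot_points: np.ndarray, skeleton_points: np.ndarray) -> float:
    """Smallest Euclidean distance between any robot link point and any skeleton key point."""
    diffs = robot_points[:, None, :] - skeleton_points[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())


def safety_check(robot_points, skeleton_points):
    d = min_human_robot_distance(np.asarray(robot_points), np.asarray(skeleton_points))
    if d < SAFETY_THRESHOLD_M:
        return "STOP_AND_ALERT"  # halt the robot and warn the operator through AR
    return "CONTINUE"


# Toy example: one robot flange point and two skeleton key points (coordinates in metres).
print(safety_check([[0.6, 0.2, 1.0]], [[0.8, 0.2, 1.0], [1.5, 0.0, 1.0]]))
```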

Experiment with evaluating overall disassembly efficiency

In the overall disassembly efficiency experiment, 10 operators each conducted 20 disassembly tests: 10 with the AR-HRC method and 10 with the pre-programming method. The two conditions were alternated to prevent the proficiency gained over 10 consecutive rounds of one method from inflating the efficiency of the later condition. The 10 participants had a 1:1 male-to-female ratio, were all aged between 20 and 40 years old, and comprised experienced and inexperienced operators in equal numbers. The disassembly time of each trial was recorded; for each person and method, the lowest and highest values were removed and the remaining eight trials were averaged. The working time and waiting time of operators and robots in each trial were also recorded. Working time refers to the total time spent by the robot or the operator on all operations in a trial, and waiting time is the total time between two successive operations of the robot or the operator, as shown in Figure 15. The total time equals the robot's working time plus the robot's waiting time. The overall time for the AR-HRC method was found to be 928 s, significantly lower than the 1157 s required by the pre-programming method. The working and waiting times of both operators and robots were lower with the AR-HRC method than with the pre-programming method. This can be attributed to the fact that pre-programming often results in redundant robot paths, since motion trajectories, speeds, and start times are pre-determined and cannot be adjusted; meanwhile, the operator's pace varies during the operation, producing waiting time between robot actions. The AR-HRC system, through real-time scene perception, obtains accurate position information of the human body and objects, as well as the ongoing operations. This facilitates timely assistance and enables optimal route planning for the robot. Consequently, the AR-HRC method reduces the working time of robots and the waiting time of operators. The visualization provided by AR also enhanced the work efficiency of operators, resulting in reduced working time for operators and decreased waiting time for robots.

Figure 15. Comparison diagram of HRC disassembly efficiency.

This study selected two mainstream retired power battery models (S471 and S472) for method validation. The results showed that the AR-HRC system can improve the disassembly efficiency of retired power batteries and the safety of HRC. The proposed method can be applied to other retired power battery models, provided the system learns the dataset of the relevant battery models. Under the conditions of the university laboratory, the experimental results demonstrate that the system meets the requirements of improving disassembly efficiency and HRC safety, and the hardware equipment employed is also suitable for industrial applications.

Conclusions and future work

The proposed AR-HRC framework demonstrates effective improvements in the responsiveness of disassembly systems. The characteristics of the AR-HRC system are described from three aspects: intelligent generation of AR-HRC policies, similarity assessment, and policy transfer methods. Based on experimental analysis, the relevant conclusions can be drawn:

  1. A KT-based AR-HRC system is proposed to address the flexible disassembly requirements of various types of power batteries and retirement conditions. This system enables adaptable disassembly processes tailored to different scenarios.

  2. A KG-based method for AR-HRC disassembly sequence planning is introduced. The disassembly sequence KG is generated by modeling both static and dynamic data. This approach provides a comprehensive representation of the disassembly process.

  3. A disassembly subgraph similarity matching method is proposed, which encodes dynamic data and evaluates process similarity by combining it with static data. This method enables efficient retrieval of similar disassembly processes from the knowledge base.

  4. A KT update method based on the AR technique is presented, which integrates operator decisions with existing strategies through real-time interaction using AR. This allows for the generation of new disassembly strategies based on the actual disassembly situation.

Compared with previous related research, the proposed approach can improve human-computer interaction ability, shorten operators' training time, and adapt to the disassembly requirements of multiple types of power batteries. With the popularization of electric vehicles, battery recycling and reuse have become increasingly important. This system can be used in the disassembly stage of battery recycling, automatically identifying battery components and achieving efficient disassembly and sorting, thereby providing reliable technical support for battery recycling and recovery. The system can also be used in training and education to teach battery disassembly skills to practitioners. Through AR technology, virtual hands-on practice scenarios can be created, helping operators master the skills and technical points of battery disassembly and improving work efficiency and safety. The AR-HRC system based on transfer learning can further serve as a customized solution, providing personalized battery disassembly systems for enterprises. However, there are limitations in this study. Disassembly scenarios involve varied retirement conditions, so the recognition performance of instance segmentation is not as good as in assembly scenarios. Considering the complexity of the disassembly scene, future research can explore incorporating scene graph representations to provide users with more intelligent and user-friendly AR assistance. Additionally, utilizing point clouds for 3D modeling, recognition, and analysis of the practical disassembly environment can enhance the quality of digital twin modeling.

Financial statement

This work is financially supported by the Municipal Natural Science Foundation of Shanghai (21ZR1400800), in part by the Priming Scientific Research Foundation for the Junior Researchers of Donghua University and the Graduate Student Innovation Fund of Donghua University (CUSF-DH-D-2022072).

Author contribution

J.L.: conceptualization, methodology, writing—original draft, writing—review & editing, funding acquisition. L.D.: conceptualization, methodology, data curation, writing—original draft, writing—review & editing. W.Q.: visualization, validation. H.Z.: conceptualization, resources.

Competing interest

The authors of this paper have no relevant financial or non-financial interests to disclose.

References

Arana-Arexolaleiba, N, Urrestilla-Anguiozar, N, Chrysostomou, D and Bøgh, S (2019) Transferring human manipulation knowledge to industrial robots using reinforcement learning. Procedia Manufacturing 38, 1508–1515.
Akkaladevi, SC, Plasch, M, Hofmann, M and Pichler, A (2021) Semantic knowledge based reasoning framework for human robot collaboration. Procedia CIRP 97, 373–378.
Chan, WP, Hanks, G, Sakr, M, Zhang, H, Zuo, T, Van der Loos, HM and Croft, E (2022) Design and evaluation of an augmented reality head-mounted display interface for human robot teams collaborating in physically shared manufacturing tasks. ACM Transactions on Human-Robot Interaction (THRI) 11(3), 1–19.
Cheng, Q, Zhang, S, Bo, S, Chen, D and Zhang, H (2020) Augmented reality dynamic image recognition technology based on deep learning algorithm. IEEE Access 8, 137370–137384.
Ding, D, Ding, Z, Wei, G and Han, F (2019) An improved reinforcement learning algorithm based on knowledge transfer and applications in autonomous vehicles. Neurocomputing 361, 243–255.
Fang, W, Fan, W, Ji, W, Han, L, Xu, S, Zheng, L and Wang, L (2022) Distributed cognition based localization for AR-aided collaborative assembly in industrial environments. Robotics and Computer-Integrated Manufacturing 75, 102292.
Favi, C, Germani, M, Mandolini, M and Marconi, M (2016) Includes knowledge of dismantling centers in the early design phase: a knowledge-based design for disassembly approach. Procedia CIRP 48, 401–406.
Green, SA, Chase, JG, Chen, X and Billinghurst, M (2010) Evaluating the augmented reality human-robot collaboration system. International Journal of Intelligent Systems Technologies and Applications 8(1–4), 130–143.
Jia, C and Liu, Z (2020) Collision detection based on augmented reality for construction robot. Presented at the 5th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), pp. 194–197.
Jiang, Z, Hsu, CC and Zhu, Y (2022) Ditto: Building digital twins of articulated objects from interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5616–5626.
Kousi, N, Stoubos, C, Gkournelos, C, Michalos, G and Makris, S (2019) Enabling Human Robot Interaction in flexible robotic assembly lines: an Augmented Reality based software suite. Procedia CIRP 81, 1429–1434.
Li, S, Zheng, P and Zheng, L (2020) An AR-assisted deep learning-based approach for automatic inspection of aviation connectors. IEEE Transactions on Industrial Informatics 17(3), 1721–1731.
Li, Y, Gu, C, Dullien, T, Vinyals, O and Kohli, P (2019) Graph matching networks for learning the similarity of graph structured objects. In International Conference on Machine Learning. PMLR, pp. 3835–3845.
Lie, LW, Aziz, NA, Wahab, DA, Rahman, MNA and Azhari, CH (2018) Enhancing remanufacturing efficiency in Malaysia through a knowledge support system: a case study of brake callipers. International Journal of Industrial and Systems Engineering 28(4), 451–467.
Liu, H and Wang, L (2017) An AR-based worker support system for human-robot collaboration. Procedia Manufacturing 11, 22–30.
Lv, Q, Zhang, R, Liu, T, Zheng, P, Jiang, Y, Li, J, Bao, J and Xiao, L (2022) A strategy transfer approach for intelligent human-robot collaborative assembly. Computers & Industrial Engineering 168, 108047.
Lv, Q, Zhang, R, Sun, X, Lu, Y and Bao, J (2021) A digital twin-driven human-robot collaborative assembly approach in the wake of COVID-19. Journal of Manufacturing Systems 60, 837–851.
Palmarini, R, del Amo, IF, Bertolino, G, Dini, G, Erkoyuncu, JA, Roy, R and Farnsworth, M (2018) Designing an AR interface to improve trust in Human-Robots collaboration. Procedia CIRP 70, 350–355.
Parsa, S and Saadat, M (2021) Human-robot collaboration disassembly planning for end-of-life product disassembly process. Robotics and Computer-Integrated Manufacturing 71, 102170.
Qu, W, Li, J, Zhang, R, Liu, S and Bao, J (2023) Adaptive planning of human–robot collaborative disassembly for end-of-life lithium-ion batteries based on digital twin. Journal of Intelligent Manufacturing, 1–23.
Schoettler, G, Nair, A, Ojea, JA, Levine, S and Solowjow, E (2020) Meta-reinforcement learning for robotic industrial insertion tasks. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9728–9735.
Stenmark, M and Malec, J (2015) Knowledge-based instruction of manipulation tasks for industrial robotics. Robotics and Computer-Integrated Manufacturing 33, 56–67.
Sylla, A, Guillon, D, Vareilles, E, Aldanondo, M, Coudert, T and Geneste, L (2018) Configuration knowledge modeling: How to extend configuration from assemble/make to order towards engineer to order for the bidding process. Computers in Industry 99, 29–41.
Tan, AH, Feng, YH and Ong, YS (2010) A self-organizing neural architecture integrating desire, intention and reinforcement learning. Neurocomputing 73(7–9), 1465–1477.
Vongbunyong, S, Kara, S and Pagnucco, M (2015) Learning and revision in cognitive robotics disassembly automation. Robotics and Computer-Integrated Manufacturing 34, 79–94.
Wang, J, Ma, Y, Zhang, L, Gao, RX and Wu, D (2018) Deep learning for smart manufacturing: Methods and applications. Journal of Manufacturing Systems 48, 144–156.
Wöhlke, G (1992) Automatic grasp planning for multifingered robot hands. Journal of Intelligent Manufacturing 3, 297–316.
Yu, J, Zhang, H, Jiang, Z, Yan, W, Wang, Y and Zhou, Q (2022) Disassembly task planning for end-of-life automotive traction batteries based on ontology and partial destructive rules. Journal of Manufacturing Systems 62, 347–366.
Zeng, A, Song, S, Welker, S, Lee, J, Rodriguez, A and Funkhouser, T (2018) Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 4238–4245.
Zheng, Z, Xu, W, Zhou, Z, Pham, DT, Qu, Y and Zhou, J (2017) Dynamic modeling of manufacturing capability for robotic disassembly in remanufacturing. Procedia Manufacturing 10, 15–25.