An augmented reality (AR) pilot program is an essential step in evaluating new software because it allows for a real-world test of the software's functionality and impact. The pilot program lets the evaluation team assess the software's performance in a controlled environment and gather feedback from users. This information can then be used to make informed decisions about the software's viability, scalability, and any necessary improvements. Additionally, an AR pilot program provides an opportunity to identify and resolve any technical or operational issues before full-scale implementation.

Overall, an augmented reality pilot program allows for a more thorough evaluation of the software and reduces the risk of costly errors or failures in a full-scale implementation. To help you better understand the pilot process, we've outlined the following steps to ensure a successful pilot.

Purpose of the pilot program 

The purpose of the AR pilot program is to evaluate the effectiveness of augmented reality software such as XRMentor®, but you will want to be more specific: describe why you are evaluating it, where in the company it will be used, and for which group of employees. During the pilot development phase, you will want to identify an internal champion. This person may not be directly involved in the pilot, but they will champion the pilot program and the value of XRMentor® for improving operations or saving time and money. Draft a short paragraph that describes how augmented reality can be used in maintenance operations. Write it as a quick brief for an executive audience, one the champion can deliver even in your absence. Think of it as answering "why augmented reality?" after you or the champion describe your pilot program.

Define specific goals for the pilot program  

There are many ways to determine objectives for an AR pilot program. We are proponents of the OKR (Objectives and Key Results) method and walk customers through a simple objective- and goal-setting exercise as one of our first pilot tasks. Regardless of the method you use, be very specific about the goals for your program to avoid boiling the ocean.

Your goals should be specific, difficult yet achievable, and time-bound. Pilots usually last around 90 days, so consider what you can achieve in a 90-day project. For example, you may decide to train 20 technicians with less than 2 years' experience to perform a specific maintenance task without error. You can improve specificity further by selecting the specific task, setting a timeline to train the employees, and giving them a time limit to execute the task.
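As an illustration only (this is not XRMentor® functionality), here is a minimal sketch of how the example pilot OKR above might be captured as structured data; the task, targets, and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float   # measurable target value
    unit: str       # how the target is measured

@dataclass
class Objective:
    statement: str
    duration_days: int
    key_results: list[KeyResult] = field(default_factory=list)

# Hypothetical OKR for the 90-day pilot described above.
pilot_okr = Objective(
    statement="Train junior technicians to perform a specific maintenance task without error",
    duration_days=90,
    key_results=[
        KeyResult("Technicians with < 2 years' experience trained", 20, "people"),
        KeyResult("Task completed within the time limit", 45, "minutes"),
        KeyResult("Errors per supervised task execution", 0, "errors"),
    ],
)

for kr in pilot_okr.key_results:
    print(f"{kr.description}: target {kr.target} {kr.unit}")
```

Writing the objective down in a structured form like this makes it harder for a goal to stay vague: every key result has to carry a number, a unit, and a deadline.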

Identify the key performance indicators (KPIs) for evaluating the software 

Each organization likely has its own performance indicators. It is best to start with your business performance indicators and align them with the work-task performance metrics that predict or optimize those indicators. For instance, you may be an organization that monetizes the performance of its technicians, in which case reducing errors and time on task may be important. Those same metrics may also have value if you are repairing company-owned equipment and want to minimize downtime. Choose the key task metrics that best predict your business and organizational goals.
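To make this concrete, here is a small sketch of how task-level KPIs such as time on task and error rate might be computed from logged training records; the record fields and values are hypothetical, not an XRMentor® export format:

```python
from statistics import mean

# Hypothetical task records logged during the pilot.
records = [
    {"technician": "T01", "minutes_on_task": 42.0, "errors": 0},
    {"technician": "T02", "minutes_on_task": 55.5, "errors": 2},
    {"technician": "T03", "minutes_on_task": 47.3, "errors": 1},
]

# Task-level KPIs that feed the business-level indicators.
avg_time = mean(r["minutes_on_task"] for r in records)
errors_per_task = sum(r["errors"] for r in records) / len(records)
error_free_pct = 100 * sum(r["errors"] == 0 for r in records) / len(records)

print(f"Average time on task: {avg_time:.1f} min")
print(f"Errors per task:      {errors_per_task:.2f}")
print(f"Error-free tasks:     {error_free_pct:.0f}%")
```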

Define the criteria for selecting participants 

Your choice of participants must be derived from your stated goals, objectives, and desired results. Let's first define two types of participants: those managing and serving as training instructors, and those who are the trainees or recipients of the training. Select project managers and instructors who are neutral with regard to the technology, so as not to be biased toward adoption or rejection. Since they are the delivery mechanism, they should be fairly technology-centric and unafraid to try something new, especially if the goal is to help them and the company perform more effectively or efficiently. A biased manager or instructor can adversely impact your results: you will be asking trainees to evaluate the software and the experience, and you want the evaluation to be of the software, not of the individual managing the effort.

The trainees should be selected randomly, if possible, from within the demographics and characteristics that align with your goals, while controlling other variables that could adversely impact your results. For example, if you are trying to evaluate XRMentor® for its ability to train new hires, select employees hired recently enough that they are unlikely to have already participated in other training events. If you maintain various types of equipment, try to control for equipment type, brand/make, model, year, or other characteristics that may significantly change the work or the challenges an employee is confronted with. In short, select participants who meet the characteristics that align with your goals, and control the ones that could cloud your interpretation of the results.
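A minimal sketch of that selection logic, assuming a hypothetical employee roster with hire dates and equipment assignments (all names and fields are illustrative):

```python
import random
from datetime import date

# Hypothetical roster; field names are illustrative only.
roster = [
    {"id": "E100", "hired": date(2024, 11, 1), "equipment": "Model-A"},
    {"id": "E101", "hired": date(2022, 3, 15), "equipment": "Model-A"},
    {"id": "E102", "hired": date(2025, 1, 20), "equipment": "Model-A"},
    {"id": "E103", "hired": date(2024, 12, 5), "equipment": "Model-B"},
]

# Control variables: recent hires only, single equipment type.
eligible = [
    e for e in roster
    if e["hired"] >= date(2024, 6, 1) and e["equipment"] == "Model-A"
]

# Random selection from the controlled pool.
random.seed(42)  # reproducible draw for the pilot record
trainees = random.sample(eligible, k=min(2, len(eligible)))
print([e["id"] for e in trainees])
```

Filtering first and randomizing second is the key design choice: the filter enforces your controls, and the random draw keeps the selection unbiased within them.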

Implementation and Design of the Pilot Program  

Since the AR pilot program is new to your organization, you will be provided an implementation manager and a project manager. Leverage them to understand how to set up the software and the minimum requirements to run and access the technology. You should also involve your IT department early and often; this may mean sponsoring an internal project for them.

A. Design of the Evaluation 

It is best to run a comparison of XRMentor® against an incumbent process, commonly traditional video-based training or live, hands-on group training. You will of course want to consider time, cost, and effectiveness, but the comparison lends power to your result. If using ClassroomXR™, we recommend comparing against traditional video or live hands-on training. Collect data such as time to set up, time to schedule, time to deliver, cost to deliver, and results on synchronous and post-training evaluations. With this in mind, establish a hypothesis, and then, using basic research design, establish the comparison system or process, select your data, and decide how you will collect and analyze it.
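One way to keep the design explicit is to log the same fields for both conditions. The sketch below is illustrative only; the condition labels, fields, and numbers are assumptions, not a ClassroomXR™ data format:

```python
from dataclasses import dataclass

@dataclass
class TrainingSession:
    condition: str          # e.g. "ClassroomXR" or "video" (comparison arm)
    setup_minutes: float    # time to set up the session
    scheduling_days: float  # lead time to get it on the calendar
    delivery_minutes: float # time to deliver the training
    cost_usd: float         # cost to deliver
    quiz_score_pct: float   # post-training evaluation result

# Hypothesis: the AR condition is faster and cheaper to deliver,
# with equal or better post-training quiz scores.
sessions = [
    TrainingSession("ClassroomXR", 15, 2, 60, 400.0, 92.0),
    TrainingSession("video",       45, 7, 90, 650.0, 81.0),
]

for s in sessions:
    print(f"{s.condition:>12}: setup {s.setup_minutes} min, "
          f"cost ${s.cost_usd:.0f}, quiz {s.quiz_score_pct:.0f}%")
```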

You want to show that XRMentor®, or more specifically ClassroomXR™, was easy to set up, quick to schedule, and rapidly delivered more effective training, and that trainees learned more than with your existing methods, if you have them.

B. Training for participants on how to use the software  

XRMentor® provides a train-the-trainer program that can run from a few hours to two days depending on your goals. It is best to dedicate time to the program, and also time for self-learning, so that the manager and instructor are proficient in using the software.

C. Deployment of the software to participants  

Again, this must involve IT. They will likely require completion of industry-standard questionnaires to assess cybersecurity, whitelisting of the software so it can be accessed internally, or permission for mobile apps to be downloaded on company networks. Once that has been completed, you can provide explicit instructions to end users or trainees on how to access the software.

D. Collection of data and feedback from participants 

You should collect both subjective and objective data from participants. Participants should include instructors and trainees at a minimum, and your data, aligned with your KPIs, objectives, and goals, should evaluate the full ownership experience. That experience includes onboarding the software, working with IT, learning to use the software, the effectiveness of the instructor, the effectiveness of the content, and, of course, whether the trainee actually learned.
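As a sketch, a combined record for one participant might pair Likert-style subjective ratings with objective results aligned to your KPIs; every field name here is hypothetical:

```python
from statistics import mean

# Hypothetical combined record for one trainee.
feedback = {
    "participant": "E102",
    "role": "trainee",
    # Subjective: 1-5 Likert ratings of the ownership experience.
    "ratings": {
        "onboarding_ease": 4,
        "instructor_effectiveness": 5,
        "content_effectiveness": 4,
    },
    # Objective: aligned with the pilot KPIs.
    "quiz_score_pct": 88.0,
    "task_errors": 0,
}

avg_rating = mean(feedback["ratings"].values())
print(f"{feedback['participant']}: avg rating {avg_rating:.1f}/5, "
      f"quiz {feedback['quiz_score_pct']:.0f}%, errors {feedback['task_errors']}")
```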

Analysis of AR Pilot Program Results  

It is best to have selected data that allows a direct comparison between two groups. This will show the power of XRMentor® in helping you achieve your goals versus an incumbent process or tool. We recommend comparing the results between two groups at a minimum, as set forth in your goals. For example, you may want to compare whether new hires learn to perform a task faster and with fewer errors using XRMentor® than a group that was not trained, or was trained on a different system. A simple analysis of what you found versus a control or a different software solution is a powerful way to communicate with decision makers.
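As a sketch of that comparison, assuming SciPy is available and using invented numbers, a Welch's t-test on task-completion times for the two groups might look like this:

```python
from scipy import stats  # assumes SciPy is installed

# Hypothetical minutes-to-complete-task for each group.
ar_group    = [38.0, 41.5, 36.2, 44.0, 39.8, 37.1]
video_group = [52.3, 48.9, 55.0, 50.2, 47.5, 58.1]

# Welch's t-test: does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(ar_group, video_group, equal_var=False)

print(f"AR mean:    {sum(ar_group) / len(ar_group):.1f} min")
print(f"Video mean: {sum(video_group) / len(video_group):.1f} min")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a real difference
```

Even a simple test like this turns "the AR group seemed faster" into a defensible statement for decision makers.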

Beyond your goals and objectives, you likely have a set of expected outcomes or subjective hypotheses. You will want to summarize what you found against what you expected. It is very common to assume new technology is harder to learn or takes longer to set up than incumbent processes, so compare what you found to be true with XRMentor® against what you expected to happen.

It is common in an AR pilot program to discover challenges along the way. Some of the most common are related to IT and accessibility, such as not being able to download software onto devices, connect to a network, or integrate with existing software systems. Summarize these challenges and turn them into action items so you can onboard the software long term.

Recommendations and Next Steps  

A. Discussion of additional features or improvements

We have found that XRMentor® usually solves 95% of the problems or has 95% of the features needed to achieve the stated goal. Create a list of what could be added to the software to get to 100%. What features are needed? What processes are missing? What integrations with other software would be helpful?

B. Plans for scaling the program

Scaling an AR pilot program is critical to enterprise deployment. We recommend going back to goal setting and determining the best way to scale to achieve your goals. You may have focused on new hires at three locations; you could then scale by adding more locations, which means recommending which facilities to select based on the technological requirements to run the software. You will have to decide whether to increase the number of trainees within a single class or to run more training sessions. You could also scale the library of content, focusing on whether the results from the first pilot hold when teaching different skills. Whichever path you choose, you will need to work with your management team and IT to determine the next step.

C. Discussion of plans for future evaluations 

While determining how to scale, you will want to discuss how to execute future evaluations. Holding firm on your KPIs, you will want to test XRMentor® for its ability to address other skills or abilities, or for use in departments beyond maintenance, such as service advising, warranty claims, and more.

Conclusion

Keep this short and to the point. Answer whether you achieved your goal, share the key results, and recommend how to move forward. Short, sweet, and just enough for the champion to share in an elevator ride.

A. Summary of key findings  

Go back to the executive summary from the start of this effort, where you gave your champion a short brief on what the software is and does. In about the same amount of space, summarize what you found. Report the bottom line: did it work? Then, in priority order, list the key findings as bullets. For example: "XRMentor® trained 30 new hires to perform X faster and with fewer errors than traditional video-based training." Follow that statement with the statistics on how much faster they were and how many fewer errors were made.

B. Final thoughts on the potential of augmented reality technology in maintenance

Lastly, conclude your effort by recommending that XRMentor® be adopted, if that is your finding, and then applied to the future areas you identified earlier. Summarize, recommend next steps, and lay out the long-term vision of what you think the company should do next.