AI in Robotiq.ai
The role of AI (Artificial Intelligence) in the Robotiq.ai platform is essential. To begin with, we use it to simplify process development and to make our RPA robots more robust.
Given that AI is quite a generic term, we'll explain how we use it in our RPA software.
The Robotiq.ai platform uses ML (Machine Learning) and shallow neural networks. We needed to give our software robot cognitive capability - more precisely, our Vision Center (VC), which is built into both the RPA platform and the recorder application. The fundamental role of the VC is to classify application elements and locate them on the screen.
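To give a feel for what "shallow neural network" means here - this is a generic illustration, not the actual Vision Center model - a classifier for UI element crops can be sketched as one hidden layer followed by a softmax over element classes. The crop size, layer widths, class names, and (random) weights below are all illustrative assumptions:

```python
import numpy as np

# Hypothetical element classes; the real Vision Center's label set is not public.
CLASSES = ["button", "text box", "dropdown menu", "other"]

rng = np.random.default_rng(0)
# One hidden layer ("shallow"): 16x16 grayscale crop -> 32 units -> 4 classes.
W1 = rng.normal(scale=0.1, size=(256, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 4))
b2 = np.zeros(4)

def classify(crop: np.ndarray) -> dict:
    """Forward pass of a shallow net over a 16x16 element crop."""
    x = crop.reshape(-1)              # flatten to 256 features
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())
    p /= p.sum()                      # softmax probabilities
    return dict(zip(CLASSES, p))

probs = classify(rng.random((16, 16)))
```

In practice the weights would be trained on labelled screenshots; here they are random, so the output only demonstrates the shape of the computation, not a real prediction.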
What does it all mean?
Our robots recognize elements in an application visually (buttons, text boxes, dropdown menus, etc.) instead of using so-called selectors. Selectors require users to have some technical knowledge, which shouldn't be necessary for working with RPA software.
Moreover, selectors are not available in every application our platform automates - Java or Oracle Forms applications, for example. They're also often unreliable and can cause robots to crash.
Because we use the VC, our robots are both more stable and easier to use.
How does Vision Center actually work in our RPA software?
When building a process in the Process Editor section, users will often have to include a "Click" step. This step is what lets the robot click a button, and that's where our VC kicks in.
Instead of working with selectors, users can easily upload an image of the button that our robot will click on when executing the process. If the button's appearance changes, we can generate a new image and replace the old one. The entire process is very intuitive, and the user immediately knows what actions our RPA software will perform in that step.
The robustness of the Robotiq.ai RPA software shows when the robot executes a process. In each step that requires clicking a button, the robot takes a screenshot of the application and retrieves the button image stored in the process. It then hands both over to the Vision Center, which searches the screenshot for the button (or control), finds its coordinates, and returns them to the robot so it can perform the "Click" action.
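The search step can be sketched with generic template matching - sliding the stored button image over the screenshot and scoring each position with normalized cross-correlation. This is an illustrative sketch, not the actual Vision Center implementation; the grayscale NumPy arrays and the `locate_button` helper are assumptions made for the example:

```python
import numpy as np

def locate_button(screenshot: np.ndarray, button: np.ndarray):
    """Slide the button image over a grayscale screenshot and return
    the (x, y) of the best-matching top-left corner plus its score."""
    H, W = screenshot.shape
    h, w = button.shape
    b = button - button.mean()
    best_score, best_xy = -np.inf, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = screenshot[y:y + h, x:x + w]
            p = patch - patch.mean()
            # Normalized cross-correlation: 1.0 means a perfect match.
            denom = np.sqrt((p * p).sum() * (b * b).sum())
            score = (p * b).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score

# Toy screenshot with a distinctive "button" pattern pasted in at (20, 10).
screen = np.zeros((40, 60))
pattern = np.arange(32, dtype=float).reshape(4, 8)
screen[10:14, 20:28] = pattern

(x, y), score = locate_button(screen, pattern)
# The robot would then click the centre of the match: (x + 8 // 2, y + 4 // 2).
```

A production system would of course use an optimized search and preprocessing of real screenshots; the exhaustive double loop here is only meant to make the idea concrete.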
If the screen resolution, or the button's location or design, changes, the Vision Center can still locate the button thanks to the different techniques we implemented.
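One generic way to survive a resolution change - again an illustrative sketch under assumed helpers (`resize_nn`, `best_match`), not the platform's actual method - is to try the stored button image at several scales and keep the best-scoring match:

```python
import numpy as np

def best_match(screen: np.ndarray, tmpl: np.ndarray):
    """Exhaustive search: (x, y, ssd) of the lowest per-pixel
    sum-of-squared-differences position for this template size."""
    H, W = screen.shape
    h, w = tmpl.shape
    best = (0, 0, np.inf)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            d = screen[y:y + h, x:x + w] - tmpl
            ssd = float((d * d).sum()) / (h * w)  # normalise by area
            if ssd < best[2]:
                best = (x, y, ssd)
    return best

def resize_nn(img: np.ndarray, scale: float) -> np.ndarray:
    """Nearest-neighbour resize, enough to sketch multi-scale search."""
    H, W = img.shape
    h, w = max(1, round(H * scale)), max(1, round(W * scale))
    ys = np.minimum((np.arange(h) / scale).astype(int), H - 1)
    xs = np.minimum((np.arange(w) / scale).astype(int), W - 1)
    return img[ys[:, None], xs[None, :]]

def multi_scale_locate(screen, tmpl, scales=(0.5, 1.0, 1.5, 2.0)):
    """Try the stored button image at several sizes and keep the best
    hit, so a resolution change does not break the match."""
    best = None
    for s in scales:
        t = resize_nn(tmpl, s)
        if t.shape[0] > screen.shape[0] or t.shape[1] > screen.shape[1]:
            continue
        x, y, ssd = best_match(screen, t)
        if best is None or ssd < best[2]:
            best = (x, y, ssd, s)
    return best

# The screen shows the button at twice the size it was recorded at.
tmpl = np.arange(12, dtype=float).reshape(3, 4)
screen = np.zeros((16, 20))
screen[5:11, 7:15] = resize_nn(tmpl, 2.0)

best = multi_scale_locate(screen, tmpl)
```

Robust systems typically combine several cues (scale-invariant features, learned detectors, etc.); the scale loop above just shows the simplest form of the idea.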
As demonstrated above, our powerful AI is focused on performance and delivering the best user experience.
Upcoming releases will include new features such as predictive process design and a scheduling assistant. We're constantly working on simplifying robot development and maintenance!