How has the vendor defined the goal of ML?
Working with its data science experts, a vendor incorporating ML into its systems needs to establish goal functions. This is absolutely critical: if the data scientists don’t define the problem the ML system is solving with sufficient accuracy and detail, you’ll get unreliable results, no matter how good the algorithm and supporting data sets may be.
For example, if the goal is defined as “determine secure connections to MySQL” but the inputs don’t account for usability requirements, a reasonable outcome for an ML algorithm could be that no connections are secure. Strictly speaking, this might be true, but a completely secure database that’s also completely unusable isn’t exactly the outcome anyone wants.
To balance security and usability, the vendor’s data science team should carefully characterize:
- the problem;
- the goal; and
- the learning process, including what margin of error is acceptable.
It’s important for everyone to agree from the beginning on acceptable levels of accuracy and what kinds of mistakes can be tolerated.
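The MySQL example above can be sketched in a few lines. This is a toy model, not any vendor’s actual goal function: the scoring formulas and the candidate “policies” are invented for illustration. The point is that a goal defined purely as “maximize security” is trivially satisfied by allowing nothing, while a goal that explicitly trades security against usability lands somewhere sensible.

```python
import math

# Toy model: a "policy" is the fraction of connections it allows (0.0 to 1.0).
# Both scoring functions below are hypothetical, chosen only to illustrate
# why the goal function must encode usability as well as security.

def security_score(allowed: float) -> float:
    """Risk grows with exposure, so fewer allowed connections score higher."""
    return 1.0 - allowed ** 2

def usability_score(allowed: float) -> float:
    """Diminishing returns: the first allowed connections matter most."""
    return math.sqrt(allowed)

def naive_goal(allowed: float) -> float:
    # Security only -- maximized by allowing no connections at all.
    return security_score(allowed)

def balanced_goal(allowed: float) -> float:
    # Security and usability weighted explicitly (50/50 here).
    return 0.5 * security_score(allowed) + 0.5 * usability_score(allowed)

candidates = [i / 10 for i in range(11)]  # allow 0%, 10%, ..., 100%
best_naive = max(candidates, key=naive_goal)
best_balanced = max(candidates, key=balanced_goal)

print(f"naive goal picks:    allow {best_naive:.0%} of connections")
print(f"balanced goal picks: allow {best_balanced:.0%} of connections")
```

Under the security-only goal the optimizer “wins” by blocking everything, which is exactly the degenerate outcome described above; the weights and score shapes in the balanced goal are the kind of thing the data science team has to agree on up front.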
What is the vendor’s testing process?
Like any other technology, ML requires extensive testing and validation to make sure it’s effective. You need to understand the logic flow behind the data and regularly test the system to weed out false positives. So if a vendor is touting their ML capabilities, make sure to discuss their testing process: how they adjust logic flows, data inputs, and the algorithms they use.
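One concrete number to ask a vendor about is the false-positive rate their validation process measures. The sketch below is a minimal, hypothetical example of that measurement; the labeled validation data is invented, and a real process would track this metric across releases and data changes.

```python
def false_positive_rate(predictions, labels):
    """Fraction of benign items (label False) that were flagged as threats.

    predictions: list of bool, True = flagged as a threat
    labels:      list of bool, True = actually a threat
    """
    benign_flags = [pred for pred, actual in zip(predictions, labels) if not actual]
    if not benign_flags:
        return 0.0
    return sum(benign_flags) / len(benign_flags)

# Toy labeled validation set (hypothetical).
labels      = [True, False, False, True, False, False, False, True]
predictions = [True, True,  False, True, False, True,  False, True]

fpr = false_positive_rate(predictions, labels)
print(f"false positive rate: {fpr:.0%}")
```

Here the detector catches every real threat but flags 2 of 5 benign items, a 40% false-positive rate: a reminder that accuracy on threats alone says little about how much noise the system generates.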
We’ve only just begun to scratch the surface of ML’s potential, and, as it’s an emerging technology, the quality of different cybersecurity vendors’ ML capabilities differs a great deal. Savvy buyers need to dig deep into potential vendors’ offerings, asking the tough questions that separate what’s real from what’s just advertising copy.