Microsoft to release security AI product to help clients track hackers
The Peninsula
Microsoft Corp. plans to release artificial intelligence tools on April 1 that will help cybersecurity workers produce summaries of suspicious incidents and ferret out the devious methods hackers use to obscure their intentions.
Microsoft unveiled its Copilot for Security about a year ago and has been trialing it with corporate customers ever since. Testers include BP Plc and Dow Chemical Co. and now number “hundreds of partners and customers,” according to Andrew Conway, Microsoft’s vice president of security marketing. Customers will pay a fee based on usage, much as they do with the company’s Azure cloud services.
The security Copilot is part of Microsoft’s ongoing effort to infuse its major product lines with artificial intelligence tools from partner OpenAI and persuade corporate customers to buy subscriptions.
While AI can help generate content and synthesize corporate data, it also makes errors that can be costly or embarrassing. Because computer security is so critical and the risks so high, Conway said the software giant has taken extra care with this Copilot. The software combines the power of OpenAI’s model with the massive troves of security-specific information that Microsoft collects.
"There are a number of things, given the seriousness of the use case, that we’re doing to address [risks],” he said, including seeking constant feedback on the product and where it falls short. "All of that said, security is still a place today where security products generate false positives and generate false negatives. That’s just the nature of the space.”