The recent cultural and financial success of ChatGPT has sparked broad interest in generative AI, including in cybersecurity. However, experts disagree on whether the current period is driven more by marketing or by genuine technology development.
Undoubtedly, Microsoft's multibillion-dollar investment in OpenAI, which resulted in the integration of the chatbot with the Bing search engine, contributed to the enormous popularity of ChatGPT. In the wake of that investment, a wave of "AI-powered" products has entered the market over the last six months. With several vendors launching AI-powered products at the RSA Conference in April 2023, generative AI was the event's unofficial theme.
We posed the question "How do cybersecurity professionals use ChatGPT?" to the chatbot itself, and it responded with several examples:
- Drafting security policies and training and awareness documents
- Vulnerability assessments, including conducting analyses, interpreting reports, and suggesting corrections
- Threat hunting, which involves analyzing logs, identifying patterns, and detecting indicators of compromise
- Threat intelligence analysis, such as distilling reports down to the relevant data and quickly gathering information from security advisories and online forums
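To make the threat-hunting bullet concrete, here is a minimal Python sketch of one workflow an analyst might use: pre-filter raw log lines for candidate indicators of compromise (IoCs) with regular expressions, then assemble a prompt to paste into ChatGPT for triage. The `extract_iocs` and `build_prompt` helpers, the patterns, and the sample log lines are all hypothetical illustrations, not part of any vendor tool.

```python
import re

# Hypothetical patterns for two common IoC types: IPv4 addresses and
# SHA-256 file hashes. Real deployments would cover far more.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256 = re.compile(r"\b[a-fA-F0-9]{64}\b")

def extract_iocs(log_lines):
    """Return candidate IoCs (IPs and file hashes) found in log lines."""
    iocs = {"ips": set(), "hashes": set()}
    for line in log_lines:
        iocs["ips"].update(IPV4.findall(line))
        iocs["hashes"].update(SHA256.findall(line))
    return iocs

def build_prompt(iocs):
    """Assemble a triage question an analyst might send to a chatbot."""
    return (
        "Do these indicators look malicious, and what should I check next?\n"
        f"IP addresses: {sorted(iocs['ips'])}\n"
        f"File hashes: {sorted(iocs['hashes'])}"
    )

# Illustrative (fabricated) firewall and antivirus log lines.
logs = [
    "2023-04-12 09:14:02 DENY tcp 203.0.113.7 -> 10.0.0.5:445",
    "2023-04-12 09:14:05 file quarantined sha256=" + "a" * 64,
]
print(build_prompt(extract_iocs(logs)))
```

The regex pre-filter keeps the prompt short and avoids pasting entire (potentially sensitive) logs into a third-party chatbot; as the article notes below, the model's answer still needs human verification.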
Although it is too early to judge the technology's long-term success, generative AI clearly has several potential uses in cybersecurity. It is important to note, however, that while ChatGPT can provide helpful assistance, cybersecurity professionals must exercise caution and rely on their own expertise. They should critically evaluate the information the chatbot provides, verify its accuracy, consult trusted sources, and follow accepted security procedures.
In the end, even with automated AI technologies, human verification is still required.