How Artificial Intelligence Will Impact the Future of Security – News Center

Charlie Bell, Microsoft Executive Vice President, Security, Compliance, Identity and Management.

Innovation has accelerated since society went digital, and some of the innovations that came with it have fundamentally changed the way we live: the internet, smartphones, social networks and cloud computing.

As we have seen in recent months, we are on the cusp of a new wave in the technology landscape that will change everything: AI. As Brad Smith recently noted, artificial intelligence and machine learning have already reached a level of development we did not expect for another decade, providing the revolutionary ability to drill down into vast data sets and find answers where previously we only had questions. We saw this a few weeks ago with the major AI integration in Bing and Edge. This innovation demonstrates the ability not only to “reason” with agility over a massive data set, but also to empower people to make decisions in new and different ways that can have a huge impact on their lives. Imagine the possibilities of that scale and power applied to protecting customers from cyber threats.

As we watch the advances enabled by AI continue to accelerate, Microsoft remains committed to investing in tools and research, as well as collaborating with industry to create safe, sustainable and responsible AI for all. Our approach gives top priority to listening, learning and improving.

In the words of Spider-Man creator Stan Lee, with great power comes great responsibility, and that responsibility falls on those who develop and secure new AI and machine learning solutions. Security is one area that will feel the impact of AI deeply.

AI will change the rules of the game

It has long been believed that cybercriminals hold the advantage in agility. Adversaries with novel attack techniques often enjoy a comfortable head start before they are conclusively identified. Even those using classic techniques, such as turning compromised third-party credentials or services into attack tools, benefit from an agility advantage in a world where new platforms are constantly emerging.

But this asymmetry can be corrected: AI has the potential to tip the agility balance in favor of defenders. AI allows them to see, classify and contextualize far more information, much faster than even large teams of security professionals could collectively achieve. The tremendous speed and capability of AI gives defenders the chance to out-maneuver attackers on agility.

If we feed our AI the right information, software running at cloud scale will help us map our actual fleet of devices, spot unusual anomalies, and quickly determine which security incidents are benign and which are the early steps of a truly sophisticated malicious scheme. And it will do so faster than a human could swivel a chair from one screen to the other.
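To make the idea of fleet-scale anomaly triage concrete, here is a minimal, illustrative sketch (not Microsoft's actual system): it flags devices whose security-event volume is a statistical outlier relative to the rest of the fleet, using a robust modified z-score so one noisy device cannot mask itself. The host names and counts are hypothetical.

```python
import statistics

def flag_anomalies(event_counts, threshold=3.5):
    """Return hosts whose event volume is an outlier versus the fleet,
    using a modified z-score based on the median and the median
    absolute deviation (MAD), which is robust to extreme values."""
    counts = list(event_counts.values())
    median = statistics.median(counts)
    mad = statistics.median(abs(c - median) for c in counts)
    if mad == 0:
        # All devices behave identically; nothing stands out.
        return []
    return [host for host, n in event_counts.items()
            if 0.6745 * abs(n - median) / mad > threshold]

# Hypothetical daily security-event counts per device
fleet = {"host-a": 12, "host-b": 15, "host-c": 11, "host-d": 14, "host-e": 480}
print(flag_anomalies(fleet))  # → ['host-e']
```

A real system would of course ingest far richer telemetry and use learned models rather than a single statistic, but the shape of the problem is the same: separate the routine from the genuinely suspicious, at a speed no human team can match.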

AI will lower the barrier to entry for cybersecurity careers

According to a study conducted by (ISC)², the world’s largest non-profit association of certified security professionals, the global cybersecurity workforce is approximately 4.7 million people, having grown by 464,000 in 2022. However, the same study indicates that an additional 3.4 million cybersecurity workers would be needed to effectively protect assets.

Security will always require humans and machines to work together, and strong AI automation will help us decide where to put human ingenuity. The more we can leverage AI to gain actionable, interactive insights into cyber risks and threats, the more power we’ll give to less experienced professionals starting their careers. In this way, AI opens the door for entry-level talent while freeing up more skilled experts to focus on bigger challenges.

The more AI works on the front lines, the greater the influence of security experts and their invaluable technical knowledge. This ultimately creates both a huge opportunity and a call to action to hire data scientists, programmers and myriad other profiles who can go deeper in the fight against cyber risk.

Responsible AI must be led by humans first

There are many dystopian visions warning us of what misused or uncontrolled AI can become. How do we, as a global community, ensure that the potential of AI is used for good and not evil, and that people can trust that AI is doing what it is supposed to do?

Part of that responsibility falls on policymakers, governments and global powers. The other part falls to the security industry, which must help build defenses that prevent criminals from using AI as a tool for attack.

No AI system can be effective unless it is grounded in the right data sets, continually fine-tuned, and subject to feedback and correction from human operators. As much as AI can help in the fight, we humans remain responsible for its performance, ethics and development. The disciplines of data science and cybersecurity have much to learn from each other, and indeed from every area of human knowledge and experience, as we explore responsible AI.

Microsoft is building a solid foundation for working with AI

In the early days of the software industry, security was not an important part of the development lifecycle, and we saw the advent of worms and viruses in the growing software ecosystem. Today, having learned from those mistakes, security is embedded in everything we do.

We see a similar situation in the early days of AI. We know that the time to protect these systems is now, while they are being built. To this end, Microsoft has invested in the security of this new frontier. We have a dedicated group of multidisciplinary professionals who are identifying how AI systems can be attacked, as well as how attackers can use AI systems themselves to launch attacks.

Recently, the Microsoft Security Threat Intelligence team made announcements that mark new milestones in this work, including the development of innovative tools such as Microsoft Counterfit, which is designed to help our security teams anticipate such attacks.

AI will not be the single “tool” that solves security in 2023, but it will become increasingly important for customers to choose security vendors that can deliver both large-scale threat intelligence and hyperscale AI. Combined, these elements will give customers the advantage over attackers when defending their environments.

We must work together to defeat the bad guys

Making the world a safer place is not something that one group or company can do alone. To achieve this goal, we need both industry and government to work together.

Every time we share our experience, knowledge and innovation, we weaken the bad guys. That is why it is so important to work toward a more transparent future in cybersecurity, and to build a security community that believes in openness, transparency and mutual learning.

I think technology is on our side so far. While there will always be bad actors with bad intentions, the bulk of the data and activity used to train an AI model is positive, and therefore the AI will be positive.

Microsoft believes in a proactive approach to security, including investment, innovation and collaboration. By working together, we can help build a safer digital world and unlock the potential of AI.
