What Will Happen to Tech in 2025?
The Onymos leadership team boasts decades of combined experience in the technology industry, encompassing roles in cybersecurity, software development, the Internet of Medical Things (IoMT), and business strategy. This deep expertise enables them to anticipate emerging trends and industry shifts that will transform how enterprises operate and create new products and services to meet the ever-changing demands of their end-users.
Over the last few years, we have seen tremendous shifts in security and data privacy, causing enterprises to rethink their software development strategies. We spoke with our Vice President of Engineering, Bhavani Vangala, to find out what she predicts will happen to the tech industry in 2025.
Privacy concerns will begin to phase out social-based authentication methods, paving the way for biometrics
“Over the past year and a half, numerous questions have been raised about data privacy in the tech industry. These concerns have emerged around AI model training (as highlighted by Zoom in 2023 and LinkedIn in 2024), user targeting, and data security practices more generally. For example, last year, Kaiser Permanente accidentally exposed over 13 million medical records to its third-party SaaS partners.
“Do you know exactly what occurs when you link your Gmail or Facebook account to another application or website for easier login? It may allow those companies to collect information related to your purchases, preferences, and, in some instances, personal details like protected health information (PHI). Most enterprises don’t understand these risks in-depth, which is how companies like Kaiser Permanente make such costly mistakes.
“There is also the question of what happens when — not if — a login is compromised. Could hackers or other bad actors access that information? All of these questions and concerns around data privacy and sharing will prompt enterprises in 2025 to rely less on social-based login methods, leading to the adoption of more secure alternatives like biometrics and two-factor authentication.”
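To make the social-login concern concrete: when an application offers “Sign in with Google,” it typically redirects the user to an OAuth consent screen, and the scope parameter in that request determines how much of the user’s data the application can later read. The minimal sketch below builds such an authorization URL with Python’s standard library; the client ID, redirect URI, and the broad Gmail scope are illustrative placeholders, not a recommendation or a description of any specific vendor’s integration.

```python
from urllib.parse import urlencode

# Illustrative values only -- a real integration would use its own registered
# client ID and redirect URI obtained from the identity provider.
params = {
    "client_id": "example-client-id.apps.googleusercontent.com",
    "redirect_uri": "https://app.example.com/oauth/callback",
    "response_type": "code",
    # The scopes requested here decide what the third party can access.
    # "openid email profile" is enough for sign-in alone; adding something
    # like Gmail read access lets the app see far more than login requires.
    "scope": "openid email profile https://www.googleapis.com/auth/gmail.readonly",
}

auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)
```

Auditing which scopes your integrations actually request is one concrete way to limit the kind of exposure described above.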
Small Language Models will gain popularity for specific domains
“In 2025, growing concerns about cost, infrastructure, and privacy will fuel the development and utilization of small language models (SLMs) across multiple sectors, such as healthcare, law, government, and finance.
“While large language models (LLMs) can be beneficial in certain situations, they often require specialized infrastructure and substantial investment, making them less accessible to many companies, especially startups. Furthermore, significant privacy and security concerns exist around the data used in training LLMs. This concern is especially acute in healthcare, where safeguarding user (or patient) information is crucial and model customization is preferred.
“Just think about it this way: would you be comfortable integrating an LLM trained on a mix of accurate and misleading data from the web into a tool that assesses cancer risk? By utilizing SLMs, organizations in high-stakes sectors can develop tailored models based on domain-specific, accurate data while ensuring privacy, security, cost-effectiveness, and efficiency.”
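As a rough illustration of the trade-off Bhavani describes, the sketch below loads a small open-weight model with the Hugging Face transformers library and runs it entirely on infrastructure the organization controls, so sensitive domain text never has to leave its environment. The model name and prompt are assumptions for illustration only; a production deployment would add domain-specific fine-tuning and evaluation.

```python
from transformers import pipeline

# Assumption: "microsoft/phi-2" stands in for any small open-weight model an
# organization might choose; swap in whatever checkpoint fits the domain.
generator = pipeline("text-generation", model="microsoft/phi-2")

# Because inference runs locally, sensitive domain text (clinical notes,
# contracts, filings) is never sent to a third-party API.
prompt = "Summarize the key risk factors mentioned in the following clinical note:\n..."
result = generator(prompt, max_new_tokens=120)

print(result[0]["generated_text"])
```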
Skepticism around AI will foster more thoughtful and scalable AI development
“AI has become a crucial topic for executives and technology leaders over the past two years, especially following the surge of generative AI tools and models in late 2022 and early 2023. However, there is growing skepticism about whether some current AI deployments are truly production-ready solutions or merely hype-driven efforts that exist only at the conceptual or document level, especially in the healthcare, finance, legal, and manufacturing industries.
“In 2025, enterprises will increasingly evaluate their end-customer needs to accurately gauge AI’s potential impact, assess the relevance of new AI trends for their businesses, and determine whether their development efforts will yield a return on investment. This evaluation process will lead to more enterprises focusing their AI efforts on delivering tangible and scalable AI that will solve a particular need or challenge for their customers and businesses.”
Precision inputs will matter less to the next wave of generative AI
“The next wave of generative AI model training in 2025 is set to be transformative, focusing on enhancing reasoning and inference capabilities to make AI responses more intuitive and aligned with human thought processes. OpenAI’s recent release of o1-preview exemplifies this shift, demonstrating significant improvements in inference and reasoning abilities.
“These advancements enable AI to process and respond to prompts with increased coherence and contextual awareness, marking a promising step forward. Current generative AI models often require detailed prompts and follow-up questions to provide accurate responses, especially for complex or nuanced inquiries. This reliance on precise inputs can make interactions feel less fluid, as users must know how to phrase questions specifically to achieve accurate responses.
“Ongoing training efforts over the next year will aim to make AI’s reasoning processes more natural and adaptable, enabling models to grasp context dynamically, consider implicit details, and apply logical steps similar to human thought patterns. This progression will allow models to not only answer questions but also infer intent and address nuanced needs effectively.”
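To see the point about precision inputs in practice, the sketch below sends both a vague prompt and a deliberately detailed one through the OpenAI Python SDK. With today’s models, the detailed prompt usually produces a far more usable answer; the promise of reasoning-focused models is to narrow that gap. The model name and prompts here are illustrative assumptions, not part of the prediction itself.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

vague_prompt = "Is this contract risky?"
detailed_prompt = (
    "You are reviewing a SaaS subscription agreement. List every clause that "
    "shifts data-breach liability onto the customer, quote each clause, and "
    "rate its severity as low, medium, or high."
)

for prompt in (vague_prompt, detailed_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any current chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```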
Generative AI still won’t be a replacement for your developers
“While some industries may still be exploring AI solutions at a conceptual level, it’s clear that generative AI tools, especially large language models (LLMs), can improve productivity by summarizing vast amounts of data in minutes. This capability allows professionals to draw insights and make informed decisions more rapidly. However, AI will not replace critical roles like software developers and architects in 2025 — or even in the years to come.
“Generative AI tools can indeed produce code that developers and enterprises may leverage to accelerate software development. While this efficiency appeals to enterprises looking to cut costs and speed up project timelines, AI-generated code requires careful review. The code these tools produce is derived from existing text and data shared online, originally contributed for other purposes or products, so it cannot simply be copied and pasted into a new codebase. Software developers and architects must thoroughly review, test, and adapt this code to fit specific software requirements and ensure it’s robust and maintainable over the long term. That is why software developers and architects remain essential to the software lifecycle: they bring the expertise necessary to adapt code to unique applications and to handle the ongoing evolution of software, making their roles crucial for successful development and deployment.
“As a result, AI is a tool that aids productivity but does not replace the need for experienced professionals in the development field. This blend of AI-driven efficiency and human oversight is particularly relevant across healthcare, finance, legal, retail, and manufacturing industries, where AI serves as an enabler rather than a replacement.”
More insights
To get more insights from Bhavani and the rest of our team on the trends shaping the technology industry and beyond, visit our news page.
You can also read or download the 2024 Onymos SaaS Disruption Report part I and part II.