Episodes
-
Low-code and no-code platforms are revolutionising application development by empowering technical and non-technical users to quickly and efficiently build powerful applications. These platforms provide intuitive visual interfaces and pre-built templates that enable users to create complex workflows, automate processes, and deploy applications without writing extensive lines of code.
By simplifying development, low-code and no-code tools open up software creation to a wider range of contributors, from professional developers looking to accelerate delivery times to business users aiming to solve specific problems independently. This democratisation of development reduces the demand for IT resources and fosters a culture of innovation and agility within organisations.
The impact of low-code and no-code technology extends beyond just speed and accessibility; it’s transforming how businesses adapt to change and scale their digital solutions. These platforms allow companies to quickly respond to evolving customer needs, regulatory requirements, and competitive pressures without the lengthy timelines associated with traditional development cycles.
In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Michael West, Analyst at Lionfish Tech Advisors, about low-code/no-code (LCNC) platforms and their benefits.
Key Takeaways:
Low-code and no-code platforms enable business solutions without coding.
These platforms broaden the developer base to include non-technical users.
Choosing the right platform involves considering functionality, standards, and vendor viability.
Low-code platforms can handle enterprise-level applications effectively.
AI integration is transforming how applications are developed.
Democratisation of development addresses the shortage of professional developers.
The market for low-code and no-code platforms is rapidly evolving.
Future trends will focus on AI capabilities and user experience.
Chapters:
00:00 Introduction to Low-Code and No-Code Platforms
02:59 The Evolution of Development Roles
05:49 Key Considerations for Adopting LCNC Tools
09:04 Democratizing Development and Innovation
11:59 Future Trends in Low-Code and No-Code Markets
-
The intersection of cryptography and GPU programming has changed the face of secure data processing, making encryption and decryption far faster and more efficient than previously imagined. Cryptography, the science of protecting data with intricate algorithms, has traditionally demanded intensive computation. GPU programming brings the parallel processing power of graphics processing units to cryptographic workloads, allowing them to run with unmatched speed.
As they continue to evolve, GPUs provide the computational muscle to execute ever more advanced cryptographic algorithms without performance penalties. Developers can now harness GPU parallelism to perform thousands of encryption tasks simultaneously, a workload that traditional CPUs struggle to keep up with. This efficiency is critical in an era of growing data volumes and rising cyber threats, where organisations need both rapid encryption and robust security.
In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Agnes Leroy, Senior Software Engineer at Zama, about the significance of encryption in high-stakes industries, the role of women in tech and the importance of mentorship in overcoming barriers in the industry.
Key Takeaways:
GPUs have evolved from graphics rendering to critical roles in data security.
Fully homomorphic encryption allows computations on encrypted data (a toy sketch of the idea follows the chapter list below).
Quantum-resistant methods are crucial for future-proofing encryption.
High-stakes industries require robust encryption to protect sensitive data.
Diverse environments in tech foster innovation and collaboration.
The future of encryption technology is exciting and unpredictable.
Chapters:
00:00 - Introduction to Cryptography and GPU Programming
01:08 - The Evolution of GPUs in Data Security
03:33 - Challenges in Traditional vs Modern Encryption
05:50 - Quantum Resistance in Encryption Techniques
07:40 - The Future of GPUs in Data Privacy
08:38 - Importance of Encryption in High-Stakes Industries
10:00 - Potential Applications of Fully Homomorphic Encryption
11:42 - Women in Tech: Overcoming Barriers
15:33 - Conclusion and Resources
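For readers who want a concrete feel for "computing on encrypted data", here is a minimal, self-contained Python sketch using Paillier, a classic additively homomorphic scheme. It is not the fully homomorphic TFHE approach Zama works on, and the tiny hard-coded primes are insecure; it only illustrates the core idea the episode discusses.

```python
# Toy, insecure Paillier demo: multiplying ciphertexts adds the plaintexts,
# so a third party can compute a sum without ever seeing the inputs.
import random
from math import gcd, lcm

# Tiny primes for illustration only; real deployments use 2048-bit moduli.
p, q = 293, 433
n = p * q                      # public modulus
n_sq = n * n
g = n + 1                      # standard generator choice
lam = lcm(p - 1, q - 1)        # private key
mu = pow(lam, -1, n)           # precomputed inverse used during decryption

def encrypt(m: int) -> int:
    """Encrypt an integer m in [0, n) with fresh randomness r."""
    while True:
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Recover m as L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n

c1, c2 = encrypt(41), encrypt(17)
c_sum = (c1 * c2) % n_sq       # homomorphic addition on ciphertexts
assert decrypt(c_sum) == 58    # 41 + 17, computed without decrypting c1 or c2
print("decrypted sum:", decrypt(c_sum))
```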
-
LLMs and AI have become major drivers of change in content creation. Understanding and applying prompting skills appropriately can help organisations optimise AI to generate high-quality content efficiently.
While AI offers multiple benefits, it's important to acknowledge the potential risks associated with its implementation. Organisations are advised to carefully consider factors such as data privacy, bias, and the ethical implications of AI-generated content.
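As a rough illustration of the structured, iterative prompting the episode covers, here is a minimal Python sketch. The template fields and the example task are invented for illustration and are not tied to any particular model or API.

```python
# A minimal way to structure a prompt before sending it to an LLM: make the
# role, context, task, constraints, and output format explicit, then iterate.
PROMPT_TEMPLATE = """\
Role: {role}
Context: {context}
Task: {task}
Constraints: {constraints}
Output format: {output_format}
"""

draft_prompt = PROMPT_TEMPLATE.format(
    role="You are an experienced technical copywriter.",
    context="We are announcing a new analytics feature to existing customers.",
    task="Write a 120-word product update email.",
    constraints="Plain language, no superlatives, UK spelling.",
    output_format="Subject line, then body text.",
)

# Iteration: after reviewing the first output, tighten the prompt rather than
# rewriting it from scratch.
revised_prompt = draft_prompt + "\nRevision note: the first draft was too formal; use a friendlier tone."

print(revised_prompt)
```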
In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Prof. Yash Shreshta, Assistant Professor at the University of Lausanne, about prompt engineering and its benefits.
Key Takeaways
Prompts are commands given to AI to perform tasks.
Prompt engineering allows users to communicate effectively with AI.
Iterative processes improve the quality of AI outputs.
Understanding LLM limitations is crucial for effective use.
Collaboration can enhance the creative process with AI.
The role of prompt engineering is rapidly evolving with AI advancements.
Data privacy is a significant risk when using LLMs.
Over-reliance on AI can lead to skill degradation.
Organisations should integrate human creativity with AI.
Regular training on prompt engineering is essential for maximising LLM benefits.
Chapters:
00:00 Introduction to Prompt Engineering and AI
01:30 Understanding Prompt Engineering
04:15 The Importance of Prompt Engineering Skills
06:37 Best Practices for Effective Prompts
08:31 The Evolving Role of Prompt Engineering
11:20 Risks and Challenges of AI in Organizations
13:15 The Future of Creativity with AI
-
Fraud has undergone a dramatic transformation in the age of artificial intelligence (AI). As technology advances, so do the methods employed by fraudsters. Modern criminals use sophisticated techniques, such as deep learning and natural language processing, to deceive individuals and organisations alike. These techniques allow them to mimic human behaviour, manipulate data, and exploit vulnerabilities in security systems.
That’s why organisations are embracing AI's strengths to combat these evolving threats. AI-driven solutions can provide real-time detection of fraudulent activities, analyse vast amounts of data to identify patterns and anomalies, and automate response processes.
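To make the "identify patterns and anomalies" idea tangible, here is a deliberately simple Python sketch of one such signal: flagging a transaction that deviates sharply from an account's usual spending. The data and threshold are invented; real fraud platforms combine many signals with trained models.

```python
# Flag a new transaction amount as anomalous if it is a statistical outlier
# relative to the account's recent history (a simple z-score rule).
from statistics import mean, stdev

def is_anomalous(history, new_amount, z_threshold=3.0):
    """Return True if new_amount is far outside the account's normal range."""
    if len(history) < 5:            # not enough history to judge reliably
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8]
print(is_anomalous(history, 49.0))    # False: in line with past behaviour
print(is_anomalous(history, 900.0))   # True: escalate for review
```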
In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Xavier Sheikrojan, Senior Risk Intelligence Manager at Signifyd, about AI fraud.
Key Takeaways:
Fraudsters have evolved from individuals to organised criminal enterprises.
AI enables fraudsters to scale their attacks rapidly and strategically.
Phishing attacks have become more sophisticated with AI-generated content.
Synthetic identities can be created easily, complicating fraud prevention.
Opportunistic fraud is impulsive, while proactive fraud is well-planned.
Businesses often fail to act post-breach due to resource constraints.
Inaction after a breach can lead to repeated attacks by fraudsters.
AI must be used to combat AI-driven fraud effectively.
Balancing fraud detection with customer experience is crucial for businesses.
Chapters:
00:00 Introduction to AI and Fraud
01:32 The Evolution of Cybercrime
05:43 AI's Role in Modern Fraud Techniques
09:55 Opportunistic vs. Proactive Fraud
12:44 Business Inaction and Its Consequences
15:59 Combating AI-Driven Fraud with AI
-
As AI technologies become more integrated into business operations, they bring opportunities and challenges. AI’s ability to process vast amounts of data can enhance decision-making but also raise concerns about data privacy, security, and regulatory compliance.
Ensuring that AI-driven systems adhere to data protection laws, such as GDPR and CCPA, is critical to avoid breaches and penalties. Balancing innovation with strict compliance and robust data security measures is essential as organisations explore AI’s potential while protecting sensitive information.
In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Erin Nicholson, Global Head of Data Protection and AI Compliance at Thoughtworks, about the importance of compliance frameworks, best practices for transparency and accountability, and the need for collaboration among various teams to build trust in AI systems.
Key Takeaways:
AI systems are powerful but require ethical and compliant design.
Lack of standardisation in AI regulations poses significant challenges.
AI models often struggle with explainability and transparency.
Compliance frameworks are essential for implementing AI in critical sectors.
Documentation and audits are crucial for maintaining AI accountability.
Baselining pre-AI processes helps build public trust in AI systems.
Organisations should map regulations to the most stringent standards.
Cross-functional collaboration is vital for effective AI compliance.
Chapters:
00:00 - Introduction to AI, Data Protection, and Compliance
02:08 - Challenges in AI Implementation and Compliance
05:56 - The Role of Compliance Frameworks in Critical Sectors
10:31 - Best Practices for Transparency and Accountability in AI
14:48 - Navigating Regional Regulations for AI Compliance
17:43 - Collaboration for Trustworthiness in AI Systems
-
As organisations increasingly migrate to cloud environments, they face a critical challenge: ensuring the security and privacy of their data.
Cloud technologies offer many benefits, including scalability, cost savings, and flexibility. However, they also introduce new risks, such as potential data breaches, unauthorised access, and compliance issues.
With sensitive data stored and processed off-premises, maintaining control and visibility over that data becomes more complex. As cyber threats continue to evolve, robust data protection strategies are essential to safeguarding information in the cloud.
In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Sergei Serdyuk, VP of Product Management at NAKIVO, about the factors driving cloud adoption, the importance of having a robust disaster recovery plan, best practices for data protection, and the challenges of ensuring compliance with regulations.
Key Takeaways:
Cloud adoption is accelerated by low barriers to entry.
Scalability in cloud environments is easier than on-premises.
Data in the cloud is vulnerable and needs protection.
The shared responsibility model places data protection on the user.
A comprehensive disaster recovery plan is crucial for businesses.
Regular testing of disaster recovery plans is essential.
Data protection strategies must include regular reviews and updates.
Compliance with data protection regulations is complex and varies by region.
Balancing security and operational efficiency is a key challenge.
Chapters:
00:00 - Introduction to Cloud Technologies and Data Protection
01:26 - Factors Accelerating Cloud Adoption
03:48 - The Importance of Data Protection in the Cloud
06:39 - Developing a Comprehensive Disaster Recovery Plan
10:05 - Best Practices for Data Protection
13:31 - Ensuring Compliance in Cloud Environments
15:56 - The Role of Continuous Monitoring in Data Protection
18:19 - Balancing Security and Operational Efficiency
-
Managed Service Providers (MSPs) are evolving beyond traditional IT support, becoming strategic partners in driving business growth. By embracing AI technologies, MSPs are improving operational efficiency, streamlining service delivery, and offering smarter solutions to meet modern challenges.
As businesses navigate digital transformation, MSPs are crucial in optimising IT infrastructure, enhancing security, and providing tailored solutions that fuel innovation. With AI-powered tools, MSPs meet today's demands and help businesses stay competitive.
In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Jason Kemsley, Co-founder and CRO of Uptime, about the proactive strategies that MSPs can adopt using AI, the challenges they face in implementation, and the ethical considerations surrounding AI solutions.
Key Takeaways:
MSPs are transitioning from traditional IT support to strategic partners.
AI is enhancing operational efficiency but not replacing human roles.
Proactive support is often neglected due to resource constraints.
AI can help MSPs predict and prevent IT issues.
Challenges in adopting AI include unrealistic expectations and lack of accountability.
There is potential for MSPs to develop their own AI tools.
Chapters:
00:00 - Introduction to Managed Service Providers (MSPs)
02:03 - The Evolving Role of MSPs in Business Growth
04:00 - AI's Impact on Service Delivery Models
07:22 - Proactive Support Strategies with AI
10:16 - Challenges in Adopting AI for MSPs
12:40 - Ethics and Accountability in AI Solutions
-
Trusted AI ensures that people, data, and AI systems work together transparently to create real value. This requires a focus on performance, innovation, and cost-effectiveness, all while maintaining transparency. However, challenges such as misaligned business strategies and data readiness can undermine trust in AI systems.
To build trusted AI, it’s crucial to first trust the data. A robust data platform is essential for creating reliable and sustainable AI systems. Tools like Teradata’s ClearScape Analytics help address concerns about AI, including issues like generative AI hallucinations, by providing a solid foundation of trusted data and an open, connected architecture.
In this episode, Doug Laney, Analytics Strategy Innovation Fellow with West Monroe Partners, speaks to Vedat Akgun, VP of Data Science & AI and Steve Anderson, Senior Director of Data Science & AI at Teradata, about trusted AI.
Key Takeaways:
Value creation, performance, innovation, and cost-effectiveness are crucial for achieving trusted AI.
Trusting data is essential before building AI capabilities to avoid biases, inaccuracies, and ethical violations.
A robust data platform is a foundation for creating trusted and sustainable AI systems.
Generative AI raises concerns about hallucinations and fictitious data, highlighting the need for transparency and accountability.
Teradata offers features and capabilities, such as ClearScape Analytics and an open and connected architecture, to address trust issues in AI.
Chapters:
00:00 - Introduction and Defining Trusted AI
01:33 - Value Creation and the Importance of Driving Business Value
03:27 - Transparency as a Principle of Trusted AI
09:00 - Trusting Data Before Building AI Capabilities
14:51 - The Role of a Robust Data Platform in Trusted AI
21:09 - Concerns about Trust in Generative AI
23:03 - Addressing Trust Issues with Teradata's Features and Capabilities
25:01 - Conclusion
-
Balancing transparency in AI systems with the need to protect sensitive data is crucial. Transparency helps build trust, ensures fairness, and meets regulatory requirements. However, it also poses challenges, such as the risk of exposing sensitive information, increasing security vulnerabilities, and navigating privacy concerns.
In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Juan Jose Lopez Murphy, Head of Data Science and Artificial Intelligence at Globant, to discuss the ethical implications of AI and the necessity of building trust with users.
Key Takeaways:
Companies often prioritise speed over transparency, leading to ethical concerns.
The balance between transparency and protecting competitive data is complex.
AI misuse by malicious actors is a growing concern.
Organisations must educate users on digital literacy to combat misinformation.
Confidently wrong information is often more trusted than qualified uncertainty.
Chapters:
00:00 - Introduction to AI Transparency
03:03 - Balancing Transparency and Data Protection
05:57 - Navigating AI Misuse and Security
09:05 - Building Trust Through Transparency
12:03 - Strategies for Effective AI Governance
-
As organisations adopt AI, data literacy has become more critical than ever. Understanding data—how it's collected, analysed, and used—is the foundation for leveraging AI effectively. Without strong data literacy, businesses risk making misguided decisions, misinterpreting AI outputs, and missing out on AI’s transformative benefits. By fostering a data-driven culture, teams can confidently navigate AI tools, interpret results, and drive smarter, more informed strategies.
Ready to boost your data literacy and embrace the future of AI?
Key Takeaways:
Companies are optimistic about ROI from AI investments.
Data literacy is crucial for effective AI implementation.
Technical debt poses challenges for AI infrastructure.
Data quality and governance are essential for AI success.
Trust in AI systems is a growing concern.
Organizations must start with clear business priorities.
The Data Festival will provide practical insights for AI adoption.
DATA festival is where theory meets practice to create real, actionable knowledge. This event brings together #DATApeople eager to drive the realistic applications of AI in their fields.
Leaving the hype behind, we look at the actual progress made in applying (Gen)AI to real-world problems and delve into the foundations to understand what it takes to make AI work for you. We’ll discuss when, where and how AI is best applied, and explore how we can use data & AI to shape our future.
Sign up now to secure your spot
-
Data labelling is a critical step in developing AI models, providing the foundation for accurate predictions and smart decision-making. Labelled data helps machine learning algorithms understand input data by assigning meaningful tags to raw data—such as images, text, or audio—ensuring that AI models can recognise patterns and make informed decisions.
AI models struggle to learn and perform tasks effectively without high-quality labelled data. Proper data labelling enhances model accuracy, reduces errors, and accelerates the time it takes to train AI systems. Whether you're working with natural language processing, image recognition, or predictive analytics, the success of your AI project hinges on the quality of your labelled data.
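To make the idea concrete, here is a small Python sketch of what labelled data looks like, plus a basic quality check: agreement between two annotators. The texts, labels, and workflow are illustrative only, not Sapien's actual pipeline.

```python
# Labelled data: raw examples paired with meaningful tags, plus a simple
# inter-annotator agreement check used to catch ambiguous or low-quality labels.
raw_examples = [
    "The delivery arrived two days late.",
    "Great support team, solved my issue in minutes.",
    "The app crashes every time I open the camera.",
]

# Each annotator assigns one tag per example (sentiment, in this toy case).
annotator_a = ["negative", "positive", "negative"]
annotator_b = ["negative", "positive", "neutral"]

labelled = [
    {"text": text, "label_a": a, "label_b": b, "agreed": a == b}
    for text, a, b in zip(raw_examples, annotator_a, annotator_b)
]

agreement = sum(row["agreed"] for row in labelled) / len(labelled)
print(f"inter-annotator agreement: {agreement:.0%}")  # disagreements go to an expert reviewer
```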
In this episode, Henry Chen, Co-founder and COO of Sapien, speaks to Paulina Rios Maya about the importance of data labelling in training AI models.
Key Takeaways:
Data labelling converts raw data into structured data that machine learning models can recognise.
Reducing bias and ensuring data quality are critical challenges in data labelling.
Expert human feedback plays a crucial role in improving the accuracy of AI training data and refining AI models.
Chapters:
00:00 - Introduction and Background
01:07 - Data Labeling: Converting Raw Data into Useful Data
03:02 - Challenges in Data Labeling: Bias and Data Quality
07:46 - The Role of Expert Human Feedback
09:41 - Ethical Considerations and Compliance
11:09 - The Evolving Nature of AI Models and Continuous Improvement
14:50 - Strategies for Updating and Improving Training Data
17:12 - Conclusion
-
Traditional KYC processes are inadequate against modern fraud tactics. While KYC helps with initial identity checks, it doesn't cover evolving threats like AI-generated deepfakes or ongoing account takeovers.
Curious about how to protect your business from the latest threats like fake IDs, account takeovers, and AI-generated deep fakes? Tune in to our latest episode, where we dive into the essentials of full-cycle verification and real-time transaction monitoring. Find out how AI and machine learning can revolutionise your fraud detection efforts and why staying updated with regulatory changes is crucial for maintaining top-notch security.
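As a toy illustration of the kind of rule "real-time transaction monitoring" is built on, here is a small Python sketch of a velocity check: too many transactions from one account in a short window get escalated. The window, limit, and account names are invented; production systems layer many rules plus ML scoring on top of checks like this.

```python
# Flag an account when it makes more than max_tx transactions inside a sliding window.
from collections import deque
from datetime import datetime, timedelta

class VelocityMonitor:
    def __init__(self, max_tx: int = 5, window: timedelta = timedelta(minutes=1)):
        self.max_tx = max_tx
        self.window = window
        self.events: dict[str, deque] = {}

    def record(self, account_id: str, ts: datetime) -> bool:
        """Record a transaction; return True if the account should be flagged."""
        q = self.events.setdefault(account_id, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:   # drop events outside the window
            q.popleft()
        return len(q) > self.max_tx

monitor = VelocityMonitor()
now = datetime(2024, 1, 1, 12, 0, 0)
for i in range(7):
    flagged = monitor.record("acct-42", now + timedelta(seconds=5 * i))
print("flag account for review:", flagged)   # True after the sixth rapid transaction
```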
In this episode of Tech Transformed, Alvaro Garcia, Transaction Monitoring Technical Manager at Sumsub, speaks to Paulina Rios Maya, Head of Industry Relations, about the manifestations of identity fraud during the user journey stages and the need for comprehensive fraud prevention measures.
Key Takeaways:
Identity fraud manifests at different user journey stages, including onboarding and transaction monitoring.
Businesses must implement full-cycle verification and transaction monitoring solutions to detect and prevent fraud in real time.
AI and machine learning are crucial in analysing suspicious user behaviour and spotting complex fraud patterns.
Sumsub offers platform solutions that include KYC, business verification, transaction monitoring, and payment fraud protection.
Chapters:
00:00 - Introduction and Overview
00:35 - Identity Fraud in the User Journey
02:01 - Types of Fraud and Fraud Prevention
04:20 - Real-Time Monitoring and Enhancing Systems
05:46 - Common Types of Fraud Faced by Financial Institutions
08:40 - The Challenge of AI-Generated Deepfakes
10:04 - Beyond KYC: Additional Measures for Fraud Prevention
12:29 - Prevention Measures and Synthetic Identity Fraud
15:21 - Effective Fraud Prevention Solutions
17:45 - Assessing the Effectiveness of Fraud Prevention Strategies
19:08 - Staying Up to Date with Regulatory Requirements
21:31 - Conclusion
-
AI is revolutionising contact centres by automating routine tasks, reducing response times, and enhancing customer experience. It can handle simple enquiries efficiently and at scale.
It helps contact centres close the gap between customer expectations and conventional customer service by enabling engagement through digital channels. AI-driven analytics improve decision-making by capturing and analysing data from customer interactions. Organisations can overcome challenges by starting small and gradually building trust in AI's capabilities.
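As a back-of-the-envelope sketch of the "handle simple enquiries automatically, hand the rest to a person" pattern the episode describes, here is a tiny Python example. Keyword rules stand in for the intent models a real contact-centre platform would use; all names and routes are invented.

```python
# Route incoming customer messages: simple enquiries go to self-service,
# sensitive or unrecognised ones fall back to a human agent.
ROUTES = {
    "reset my password": "self_service_bot",
    "opening hours": "self_service_bot",
    "cancel my contract": "human_agent",    # sensitive: always escalate
}

def route(message: str) -> str:
    text = message.lower()
    for phrase, destination in ROUTES.items():
        if phrase in text:
            return destination
    return "human_agent"                    # when unsure, fall back to a person

print(route("Hi, what are your opening hours on Sunday?"))  # self_service_bot
print(route("I want to cancel my contract today"))          # human_agent
```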
In this episode, Paulina Rios Maya, Head of Industry Relations at EM360, speaks to Jon Arnold, Principal at J Arnold & Associates, about the use of AI in contact centres.
Key Takeaways:
AI is a transformative technology that rethinks customer engagement and addresses customer problems in contact centres.
AI enables engagement through digital channels and helps contact centres close the gap between customer expectations and conventional customer service.
AI-driven analytics improve decision-making by capturing and analysing data from customer interactions.
Organisations can overcome challenges by starting small and gradually building trust in AI's capabilities.
AI helps protect privacy and mitigate fraud in contact centres.
Chapters:
00:00 - Introduction and Overview
01:06 - The Transformational Power of AI in Contact Centers
03:00 - Automating Routine Tasks and Enhancing Customer Experience
06:24 - Engaging Customers through Digital Channels
11:09 - Improving Decision-Making with AI-Driven Analytics
15:28 - Overcoming Challenges and Building Trust in AI
17:23 - Protecting Privacy and Mitigating Fraud in Contact Centers
-
Join us in this exciting episode of Tech Transformed, where we talk to Kelly Vero, a pioneering game developer, digital leader, and visionary in the metaverse. With a career spanning 30 years and a resume that includes contributions to legendary franchises like Tomb Raider and Halo 3, Kelly brings a wealth of knowledge and experience to the table.
Kelly’s unique journey in the tech world is nothing short of extraordinary. From joining the military to learn about ballistics for Halo 3 to founding the award-winning startup NAK3D, she has always pushed the boundaries of what’s possible.
Kelly Vero speaks to Paulina Rios Maya about the hurdles of being a woman in the tech industry, the principles of gamification, and the overhyped trends in AI and NFTs. They discuss what’s genuinely beneficial versus what’s just noise.
Key Takeaways:
The gaming industry offers opportunities for problem-solving and creativity.
Role models can come from everyday people who inspire and support others.
Gamification is about creating an engaging user experience that encourages return and contribution.
The tech industry has seen beneficial changes in globalized platforms and a focus on quantum solutions.
Chapters:
00:00 - Introduction and Background
02:28 - The Gaming Industry and Problem Solving
07:36 - Challenges and Role Models in the Tech Industry
11:54 - The Principles and Ethical Considerations of Gamification
18:30 - Beneficial Changes and Overhype in the Tech Industry
20:25 - Creating Digital Objects and the NFT Standard
23:45 - Introducing NAK3D: Bringing Non-Designers into Design
-
Latin America's approach to AI is marked by strategic choices with significant implications. Many countries in the region have adopted AI technologies and frameworks developed by leading tech nations, focusing on imitation rather than innovation. This strategy enables rapid deployment and utilisation of advanced AI solutions, bridging the technological gap and fostering economic growth.
However, reliance on external innovations raises questions about the region's long-term competitiveness and ability to contribute original advancements to the global AI landscape.
In this podcast, Alejandro Leal, Analyst at KuppingerCole, speaks to Paulina Rios Maya, Head of Industry Relations, about how socio-economic factors, including limited research funding, infrastructural challenges, and the need for quick technological catch-up, drive this pattern of imitation. While this approach has led to swift AI adoption, it underscores a dependence on foreign technologies and expertise.
Key Takeaways:
Trust, ethics, and legal considerations are essential challenges in integrating AI into the security infrastructure.
Public-private partnerships and regional cooperation are crucial for advancing AI technology in the region.
Interoperability and alignment with international and regional standards are key areas of focus to ensure the ethical use of AI in Latin America.
Chapters:
00:00 - Introduction: AI in Latin America
00:58 - Key Initiatives in the Security Sector in Mexico
05:12 - Mexico's Evolution in National Security
07:33 - Challenges in Integrating AI into Security Infrastructure
12:40 - Comparison with Other Latin American Countries
19:26 - The Role of Public-Private Partnerships
22:51 - Focus on Interoperability and Alignment with Standards
23:46 - Conclusion: Ethical Use of AI in Latin America
-
AI fraud is not just a concern; it is a pressing issue. As artificial intelligence technologies advance, fraudsters are developing increasingly sophisticated methods to exploit these systems. Typical forms of AI fraud include deepfakes, which use AI to create convincing fake images, audio, or video for disinformation, blackmail, or identity theft, and advanced phishing schemes that leverage AI to craft highly personalized and deceptive messages. Understanding and addressing AI fraud is urgent for individuals, businesses, and governments seeking to protect themselves against these evolving threats.
Join Alejandro Leal and Pavel Goldman-Kalaydin, Head of AI and Machine Learning at Sumsub, as they delve into the growing issue of AI fraud.
Themes:
AI fraud
Deepfakes
Global trends
Regulation
Chapters:
00:00 - Introduction and Background
01:13 - Understanding AI Fraud
02:11 - Examples of AI-Driven Fraud
08:02 - Global Trends in AI-Driven Fraud
13:55 - Preventing AI-Driven Fraud
18:11 - The Future Evolution of AI Fraud
22:38 - Conclusion and Final Thoughts
-
While generative AI and large language models often receive inflated acclaim, their true value is found in harnessing intelligence and data-driven insights.
Despite the hype, AI has shortcomings: large language models can be solutions in search of a problem, and organisations face real challenges in understanding AI's impact on advertising and in making savvy investment decisions.
In this podcast, Dana Gardner, President & Principal Analyst of Interarbor Solutions, and Paulina Rios Maya, Head of Industry Relations at EM360Tech, discuss why AI should be seen as a transformational technology rather than just another automation tool.
Key Takeaways
AI should be viewed as a transformational technology that can refactor our actions rather than just an automation tool.
Large language models may be a solution in search of a problem, and their high costs and sustainability impacts should be considered.
Making informed decisions about AI investments requires considering the potential benefits and costs.
Chapters:
00:00 - The Current State of AI Development
02:17 - Viewing AI as a Transformational Technology
03:45 - The Limitations of Large Language Models
05:40 - The Impact of AI on Advertising
06:37 - Navigating the Complexities of AI Investments
-
Infosecurity Europe is a cornerstone event in the cybersecurity industry. It brings together a diverse array of cybersecurity services and professionals for three days of unparalleled learning, exploration, and networking. At its essence, the event is committed to delivering indispensable value to its attendees through meticulously crafted themes and discussions.
This year's focus revolves around resilience, artificial intelligence, legislation and compliance, leadership and culture, and emerging threats. With a comprehensive programme featuring keynote speakers, strategic talks, insightful case studies, state-of-the-art technology showcases, dynamic startup exhibitions, and immersive security workshops, Infosecurity Europe offers a deep immersion into the dynamic landscape of cybersecurity.
Pioneering new approaches, the event introduces QR-based tools for seamless content collection and unveils a digital meetings platform to foster enhanced connectivity and collaboration. By participating in Infosecurity Europe, cybersecurity professionals equip themselves with the tools, knowledge, and connections vital for continuous growth, networking, and industry advancements.
In this episode of the EM360 Podcast, Paulina Rios Maya, Head of Industry Relations at EM360Tech, speaks to Victoria Aitken, Conference Manager and Nicole Mills, Senior Exhibition Director, to discuss:
Cybersecurity Trends
Security events
Infosecurity Europe
Register here to attend Infosecurity Europe | 4–6 June 2024
Chapters:
00:00 - Introduction and Overview of Infosecurity Europe
01:33 - Key Themes and Topics at Infosecurity Europe
14:04 - New Approaches and Tools
16:21 - Women in Cyber and Analyst Sessions
-
Ever wonder how search engines understand the difference between "apple," the fruit, and the tech company? It's all thanks to knowledge graphs! These robust and scalable databases map real-world entities and link them together based on their relationships.
Imagine a giant web of information where everything is connected and easy to find. Knowledge graphs are revolutionizing how computers understand and process information, making it richer and more relevant to our needs.
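For a concrete sense of that "giant web of information", here is a minimal Python sketch of facts stored as (subject, predicate, object) triples, which is how a program can tell "apple the fruit" from "Apple the company" by following relationships. The entity names are illustrative; a real deployment would use a graph database rather than in-memory tuples.

```python
# Facts as triples; relationships make the two "apple" entities unambiguous.
triples = {
    ("apple_fruit", "is_a", "fruit"),
    ("apple_fruit", "grows_on", "apple_tree"),
    ("apple_inc", "is_a", "company"),
    ("apple_inc", "founded_by", "steve_jobs"),
    ("steve_jobs", "is_a", "person"),
}

def neighbours(entity: str):
    """All facts that mention the entity as subject or object."""
    return [t for t in triples if entity in (t[0], t[2])]

def entity_type(entity: str):
    """Follow the is_a relationship to find what kind of thing the entity is."""
    return next((o for s, p, o in triples if s == entity and p == "is_a"), None)

print(entity_type("apple_fruit"))   # fruit
print(entity_type("apple_inc"))     # company
print(neighbours("apple_inc"))      # the company's relationships, not the fruit's
```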
Ontotext is a leading provider of knowledge graph technology, offering a powerful platform to build, manage, and utilise knowledge graphs for your specific needs. Whether you're looking to enhance search capabilities, improve data analysis, or unlock new insights, Ontotext can help you leverage the power of connected information.
In this episode of the EM360 Podcast, George Firican, Founder of LightsOnData, speaks to Sumit Pal, Strategic Technology Director at Ontotext, to discuss:
Knowledge Graphs
Use Cases
Ontotext GraphDB
Integration of AI
Industry best practices
-
Gone are the days of merely safeguarding school computers! Censornet, a rising star in the tech industry, has undergone a remarkable transformation. From its roots as an internet security provider for educators, it has emerged as a trailblazing force in digital risk management.
Today, Censornet offers a comprehensive suite of tools designed to confront the dynamic challenges of the digital landscape, ensuring a safer and more secure online environment for all. This evolution stems from recognising that traditional threats are no longer the sole concern. With the proliferation of Shadow IT, unauthorised applications and devices, and the rise of insider threats, organisations face a complex array of risks.
In this episode of the EM360 Podcast, Jonathan Care, Advisor at Lionfish Tech Advisors, speaks to Gareth Lockwood, VP of Product at Censornet, to discuss:
Inspiration behind Censornet
Censornet’s Capabilities
Censornet’s Clients
Shadow-IT
Prevention of future vulnerabilities with AI and Censornet