World of Watson 2016: Derek Schoettle Talks About His Experience of Data Science at IBM

Derek Schoettle at World of Watson 2016

TL;DR: the team at IBM just made getting access to a world-leading Data Science platform, with integrated tools and collaboration, as simple as web-mail!

Day one at World of Watson 2016 in Las Vegas has just wrapped up, and I had the honour and privilege of being on stage with Derek Schoettle, General Manager of Analytics Platforms & Cloud Data Services at IBM, to share a conversation on the topic of “a day in the life of a Data Scientist”. We discussed some of the fundamental challenges faced by any organisation or individual looking to embark on any form of Data Science journey.

I want to share a couple of key things that came out of our conversation. Derek made a number of very exciting announcements, and we also touched on what I believe are three of the most fundamental pillars of Data Science – I thought I would share them with you, as I’m certain you can put them to good use.

But before I dive into any of that, I’d first like to provide two quick quotes I took away from today’s keynote, and then some context based on my own experience over the years, so that you might fully appreciate the gravity of today’s announcements.

A COUPLE OF FUN QUOTES

“IBM’s Data Science Experience & Watson Data Platform were built with teams in mind; we believe Data Science is a team sport.”, Derek Schoettle ( IBM WoW 2016 )

“The Data Science Experience platform from IBM makes it possible for me to go from Zero to Hero in an instant.”, Dez Blanchfield ( IBM WoW 2016 )

FIRST A BIT OF CONTEXT

I’d like to set the scene by describing what various elements of “a day in the life of a Data Scientist” have, until now, been like in my experience. I believe it will give you a far better appreciation of just how great a paradigm shift IBM’s latest offerings in Data Science and Analytics really are, and what they mean to anyone in this space.

THE GOOD OLD DAYS ( WELL, LAST WEEK ACTUALLY )

For years now, a regular challenge I’ve been given by organisations of all sizes, ranging from small teams of two or three in a startup through to large enterprises, multi-nationals and federal government agencies, is to “stand up” a full-stack “platform” to support their desire, or more often their need, to apply Data Science in some form to a core business issue their existing Information Management and Business Intelligence systems cannot address. Often they have an idea or initiative they wish to play out on how they might transform the way they do business, or they simply want to explore how they might offer better products, services and support to existing customers, and of course entice new customers.

This basic challenge is all too often a “non-trivial undertaking”. In many of these organisations there are gaps in the knowledge, experience and skills of existing staff or teams when it comes to something as straightforward as developing a requirements document to capture what the customer wants and expects to see delivered, let alone developing a business case, a supporting cost model, project management, design, development and implementation. All too often a “great idea” quickly becomes a “nightmare” as organisations try to weave their way through the seemingly endless options, and work out how to implement even a basic proof of concept ( PoC ) environment or a “sandbox” to start playing in.

NOT ALL ASPECTS OF DATA SCIENCE ARE SEXY

Assuming we do, in time, reach a set of decisions about exactly what is required in the Data Science “stack” to provide a safe, secure, easy-to-use, consistent platform for a client’s organisation, I would usually set upon the challenge of standing up a complex ecosystem as quickly and cost-effectively as possible. For the most part, this is about as far removed from the sexy part of Data Science as you can get – the engineering elements are mostly just hard work, albeit a necessary evil on the way to the end goal, as it were.

To stand up a big data ecosystem of any scale, I’d require network, storage and compute infrastructure in some form, either in a public cloud, on premises, or in a 3rd-party data centre, be it physical or virtual, and, depending on the volume of data being moved around, ideally located as close to the customer network as possible.

FIRST THE NETWORK

To connect any of this ecosystem to the world and/or the customer’s own network, I’d need telecommunications providers, networks, IP address space, IP routing, IP subnetting, domain names, bandwidth, routers, switches, firewalls, firewall policies, firewall rules, a list of TCP/IP and UDP/IP ports to open and close, multi-factor authentication with physical or software security tokens, intrusion detection, intrusion inspection, and usually some form of monitoring for this network stack.

This, mind you, is merely what’s required to connect the environment to the internet or the customer network – I won’t torture you with the complexities of network topologies employed in-rack or inter-rack to connect the storage and compute nodes; that would be cruel and unusual punishment. IBM has an entire library of Redbooks you can search to get up to speed on that, if you have the time and inclination.
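
To make the scale of that checklist a little more concrete, below is a minimal sketch of the kind of pre-flight connectivity check I’d typically script once the firewall rules were supposedly in place. The host names and port numbers are purely illustrative assumptions on my part, not values from any particular build or reference architecture.

```python
# Hypothetical pre-flight check: confirm which of the ports on the
# "open/close" list are actually reachable before blaming the platform.
# Host names and port numbers below are illustrative placeholders only.
import socket

REQUIRED_TCP_PORTS = {
    "edge-gateway.example.internal": [22, 443],       # assumed SSH and HTTPS management access
    "hadoop-master.example.internal": [8020, 9870],   # assumed HDFS NameNode RPC / web UI ports
    "notebook-host.example.internal": [8888],         # assumed notebook / HUE-style web UI port
}

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, ports in REQUIRED_TCP_PORTS.items():
        for port in ports:
            status = "open" if port_is_open(host, port) else "BLOCKED"
            print(f"{host}:{port} -> {status}")
```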

NOW STORAGE & COMPUTE

Next I’d need to procure and build servers, storage, uninterruptible power supplies and racks, mount the servers in said racks, implement a mix of physical and logical access control, install and configure operating systems, create system, application and user accounts, passwords and usually pre-shared keys, then install and configure a myriad of software such as programming languages, tools, libraries, modules and plug-ins, the Apache Hadoop Distributed File System ( HDFS ), various parts of the Hadoop MapReduce ecosystem with tools like Hive, Pig and Impala, and my personal favourite, the Swiss Army knife of such an ecosystem, the Hadoop User Experience, aka HUE.

And more often than not these days, that stack would also include Apache Spark, which of course means a Java runtime environment or Java software development kit for it to run on. I’d also need operating system services, application services, user activity logging and monitoring tools, admin tools, and a web server platform or two. That’s before I even get to the point of being able to ingest some data, cleanse that data, normalise said data in various ways, wave my magic ETL & ELT wand over it a few times, and the list goes on and on. And that’s just the tip of the proverbial large block of solid liquid in a sea of potential troubles.
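
And all of that engineering exists only so that, eventually, someone can run a few lines of analysis. As a purely illustrative sketch, assuming a working Apache Spark installation and a hypothetical CSV extract with customer_id, amount and country columns ( none of which come from any real project ), the ingest-and-cleanse step the whole stack ultimately supports looks something like this:

```python
# A hedged sketch of a basic ingest-and-cleanse job in PySpark.
# The file name and column names are hypothetical examples only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-cleanse-sketch").getOrCreate()

# Ingest: read a raw CSV extract into a DataFrame.
raw = spark.read.csv("customer_transactions.csv", header=True, inferSchema=True)

# Cleanse: drop rows missing key fields and normalise the country column.
clean = (
    raw.dropna(subset=["customer_id", "amount"])
       .withColumn("country", F.upper(F.trim(F.col("country"))))
)

# A simple aggregate of the sort downstream notebooks would build on.
summary = clean.groupBy("country").agg(
    F.count("*").alias("transactions"),
    F.sum("amount").alias("total_amount"),
)
summary.show()

spark.stop()
```

A handful of lines of code, in other words, sitting on top of weeks of infrastructure work.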

CAN’T YOU USE A FREE EVAL SANDBOX FOR THAT?

Some might argue, “oh, I can do that with any of the Hadoop distros’ evaluation sandbox offerings in a public cloud in a few hours”, and yes, that’s true; for many PoCs that may provide what you need. But for most projects I’ve found that free, cloud-hosted sandbox offerings generally don’t meet the core requirements of what most customers need once they go beyond one user on a laptop or a few GB of data.

Often, to gain rapid deployment using a pre-built demo sandbox, I would end up sacrificing so many basic things, which in time turn into brick walls I have to work around in some way, all too often resulting in a kludge that would never make its way into a production-scale implementation.

So yes, it’s entirely possible that a free, cloud-hosted sandbox could, for some, offer a simple, quick starting point, but for most solutions I’ve designed and built, I’ve found myself working around their limitations often enough to want to build “from scratch”, as it were.

BUT THERE IS A BETTER WAY

Over the past few months, I’ve had the honour of being given access to a number of early adopter programmes within the IBM family, each of them allowing me pre-release access to a range of IBM’s new tools and platforms, such as the BlueMix Platform as a Service ( PaaS ) offering, the Data Science Experience platform, and now the Watson Data Platform, and I look forward to getting my hands on the new Watson Machine Learning service soon.

Of course, each time I’ve had such access, it has meant keeping a number of exciting secrets, as it were, about the respective platforms until they were publicly announced.

But now that the 2016 New York City #DataFirst launch of the Data Science Experience ( aka DSX ) and the 2016 World of Watson event here in Las Vegas have taken place, and the new services I’ve had the privilege of playing with are officially announced, launched, and very much public knowledge, I feel compelled to share information about them. Each of these platforms has in turn given us so many more options to provide value to organisations in far shorter time frames, at far lower cost, with almost none of the overhead previously experienced in the basic challenge of “standing up a platform” before being able to engage in the exciting new endeavour of Data Science.

AN ENTIRE DATA SCIENCE PLATFORM AT THE END OF A URL

When you picture the engineering quagmire I outlined earlier, which all too often feels like a Herculean challenge and invariably is one, and all of the time, effort, cost and drama surrounding merely building a platform before we can even get to the business of fundamental Data Science itself, you quickly gain some appreciation for just how exciting it is for folk like myself and my peers working in this brave new world to remove that whole painful issue of building and configuring a Data Science “stack”, and to leapfrog directly to the actual Science part of Data Science without the Engineering part tripping us up each time. It’s a game changer, by no small measure.

“IBM just put an entire Data Science platform at the end of a URL. My big data platform is now a bookmark.”, Dez Blanchfield ( IBM WoW 2016 )

As I mentioned a moment ago, IBM formally announced and launched their Data Science Experience ( DSX ) platform a couple of weeks ago in New York City ( I had the honour of being part of that amazing event as well ), hosted in what some call the centre of the big data business universe, the heart and soul of the heady world of high finance and high-frequency trading.

Today my lucky stars were again aligned, as I was privileged to be on stage with Derek Schoettle when he formally announced the availability of the Watson Data Platform, the Watson Data Service, and the Watson Machine Learning service, ground-breaking offerings in their own right.

In making these types of services available through something as accessible as the ubiquitous web browser, IBM has dramatically shortened and simplified the route by which individuals and organisations can gain access to the tools required to begin applying Data Science and Machine Learning to their own challenges in data analysis and decision making, by leveraging the natively integrated Data Science Experience, Watson Data Platform, Watson Machine Learning service and the BlueMix cloud Platform as a Service.

In effect, what IBM has successfully done is deliver on the long-overdue and much-desired promise of cloud-based big data, analytics and machine learning services, all in a single, easy-to-use, affordable “single pane of glass” via the now ubiquitous web browser.

“With their Data Science Experience and Watson Data platforms, IBM has made Data Science as accessible as web-mail.”, Dez Blanchfield ( IBM WoW 2016 )

They have taken Data Science, Big Data, Analytics and Machine Learning and made it as simple and affordable as web-mail. We’ve all seen and experienced the powerful paradigm shift web-mail brought to the challenge of gaining access to email; now we have the same easy access and simplicity of use via a browser-based platform for Data Science – and it’s a WoW moment ( pun intended ).

THREE KEY PILLARS OF DATA SCIENCE

OK, so I promised not just to excite you with what I believe are some of the biggest announcements in Data Science and Analytics to come out of World of Watson 2016, but also to touch briefly on the three key pillars of Data Science I had the pleasure of discussing on stage with Derek. So here they are.

1. LEARN

Built-in learning to get started or go the distance. A native feature in the IBM DSX is something they refer to as Community Cards. These are a standard template by which DSX users can share articles, data-sets, models, links, videos, almost any form of content, aimed at sharing information, knowledge and data, either privately and securely within their own teams and organization, or with the broader DSX user community and even beyond the DSX platform.

It’s ridiculously easy to publish a Community Card and share it even with folk outside the DSX platform, through simple mechanisms such as a tweet on Twitter or a post on your LinkedIn profile.

This may sound like a simple idea, and in many ways it is, but it is a very powerful feature which could easily be overlooked. I consider it one of the three most important pillars of Data Science as a whole, and in particular of the IBM Data Science Experience, for learning, and in turn sharing what we learn, is surely one of the core tenets of both the Data Science community and the broader open source community. I invite you to keep this core ideology in your top three key pillars of any Data Science journey.

2. CREATE

To allow us to create with ease, IBM offers through DSX & WDP the best of open source alongside IBM products. Once upon a time, when the name IBM came to mind, the last thing you’d associate it with was open source, but those days are long gone. Yes, IBM still builds some of the biggest proprietary software platforms on the planet, but they are now also among the largest contributors to open source on the planet, in particular to the Apache Spark project.

And with that transition has come a significant shift in culture and behaviour, a shift we should congratulate IBM for, as it has come about in record-breaking time and its impact and positive repercussions are almost immeasurable. One area where we can measure that positive impact, though, is the power to create through a single, common, integrated Data Science & Analytics platform.

When you remove every possible barrier to your teams being able to jump directly to creating things, whether content, data, code, models, or collaboration opportunities, you can easily place a value on the time saved, the productivity it enables, and the dramatically reduced “time to value” your organisation gains. So with that in mind, I invite you to ensure that the ability to “create” remains in your top three key pillars of any Data Science initiative, and that you consider putting a value on the benefits gained and time saved as a result of the power to create quickly, securely and collaboratively.

3. COLLABORATE

Community and social features that enable collaboration are paramount to the success of any Data Science initiative. Until the full force of what’s often referred to as Web 2.0 ( pronounced “two dot oh” ) came into effect, the true power of collaboration was in so many ways constrained to old-school, in-person or small-team efforts through email, conference calls, and the likes of intranets.

With recent developments in web technology, we have seen search engines do keyword prediction and search term completion in real time, we’ve seen social media sites enable real-time voice, video, chat and file exchange, and the likes of Google Hangouts and WebRTC have taken what were once very expensive and cumbersome video and voice conferencing models and put them directly into near-zero-cost web browser interfaces, making them available to the great unwashed masses around the world.

When all of that is bundled natively into a Data Science platform, the ability to create a collaboration workspace and a web-based notebook, to code in R or Python, and to seamlessly use connectors to access and import data, whether locally on your laptop or server, remotely across your network and your own business systems, or across the public internet to private data you have the relevant access to, or to public data-sets, of which there are now millions across every imaginable industry and market segment, all adds up to immeasurable power to collaborate like never before.

Add to that drag-and-drop capabilities for your own resources, or resources shared by your team or teams, from within your own organisation or from external sources beyond your own firewalls, be it private data you have been given access to or public data, and then the ability to share your own work, your own code, notebooks and models at the click of a mouse button ( or the touch of a finger on a tablet ), through social media or private invites, driving safe, secure collaboration in ways we’d until recently only dreamed of. Well, you get the picture.
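
To picture what that feels like in practice, here is a minimal, hypothetical notebook cell of the kind a browser-based platform lets you run within minutes of signing up; the dataset URL and the columns it implies are placeholders of my own, not a real DSX sample.

```python
# A hypothetical first cell in a hosted, browser-based notebook:
# pull a remote dataset over HTTP and take a quick first look at it.
# The URL is an illustrative placeholder, not a real data source.
import pandas as pd

DATA_URL = "https://example.com/open-data/city_air_quality.csv"

df = pd.read_csv(DATA_URL)             # a platform connector would play the same role
print(df.shape)                        # how many rows and columns did we get?
print(df.dtypes)                       # which data types were inferred?
print(df.describe(include="all"))      # a quick statistical profile before any modelling
```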

The power of this type of collaboration is mind-boggling, and so many of us now take it for granted, but again I invite you to recognise just how powerful collaboration actually is, to encourage it, nurture it, and support it within your teams and across your organisation, and to keep collaboration among the top three phrases you use when referring to Data Science in any form. The rewards for doing so are such a game changer that I get a shiver when I think about how clunky collaboration was before we had the likes of the tools integrated natively into IBM’s DSX & WDP.

SO WHERE TO FROM HERE?

Well, I’m glad you asked. I’d like to leave you with one last thought, an invitation in fact: if you have not already signed up for an account on the IBM Data Science Experience and had a taste of what is possible on the platform today, then please do put time in your calendar, block out an hour or two, sign up, and try it out.

Once you have signed up, had a good look around and played with it, run some of the pre-built demos, checked out the examples, the community-shared articles and the free data-sets, and given yourself the chance to make a fully informed decision about your own Data Science journey, you will be unlikely to ever want to build your own “stack” again; you’ll know that there’s a better way.

In short, don’t take my word for it: go try it out and prove it to yourself by getting hands-on. Find out for yourself whether I “drank the Kool-Aid”, or whether I am indeed correct in thinking that these exciting innovations from IBM are a bona fide game changer, if not a complete paradigm shift from the old to the new. Prove me wrong if you will, but I suspect that in your first hour on the platform you may just find yourself staring at the screen as I have many times, thinking ( possibly out loud ), “can it really be this easy?”.

I look forward to hearing what you think. Let me know in the comments section below, as I’d love to hear your feedback, and I’d dearly love to see a healthy debate ensue, as I’m sure it will. Go forth and Learn, Create, and Collaborate. Your time starts… now!

Rebounding for the data-driven future with FPGAs and eASICs

Field Programmable Gate Array Technology

Data is now the driving force of the modern world. How well your business performs, and where it ranks against its peers, depends on how well you leverage the data available to you, using emerging tools such as ML, AI, and the cloud. These forces have, in turn, put Field Programmable Gate Arrays (FPGAs) in the spotlight.

Lately, I had the great fortune to sit down for an in-depth discussion with Jim Dworkin, Senior Director of the Cloud Business Unit in the Programmable Solutions Group at Intel. During our discussion, Jim asserted that in order to unlock the potential of data, we need to embrace the latest FPGA technology.

Further, putting the right modern architecture in place now can uncover the paths to getting things right. We need to educate ourselves on the ‘hardwiring’ of data flow to ensure we can properly leverage the power of data, speed time to market, reduce the cost of ownership, and do much more that could take businesses to new heights.

For example, the technology Intel has been offering has evolved to the point where “off-the-shelf” parts are more capable than ever before and can now solve specific infrastructure or business problems.

With the adoption of the latest FPGA technology and eASICs (the Intel technology discussed above), there has been an acceleration in infrastructure use cases such as SmartNICs. So, what is a SmartNIC? A SmartNIC is a programmable accelerator that handles networking data with strong security while offering storage flexibility and efficiency at the same time.

With a SmartNIC on board, businesses have enough power to handle more sophisticated infrastructure workloads on their cloud hosts, cutting wasted time and saving resources. SmartNICs also add great value in supporting virtualised services, such as multi-tenant shared cloud and more.

As hyper-scalers mushroom, the overhead of network infrastructure can become daunting, but FPGA applications have helped manage that.

Apart from this, Intel has also developed FPGA-based cloud SmartNIC platforms that replicate the architectures used by the hyper-scalers. So, how does it work?

This platform integrates a high-performance Intel Stratix 10 FPGA with an Intel Xeon D processor, working together on the SmartNIC card to enable virtual switching and offer Tier-2 data centres a mass-market solution.

Intel has also been investing heavily in more efficient AI via recommender systems and natural language processing, and has even developed a more robust form of FPGA that is able to interpret voice coder inputs.

Jim contends that the performance of a GPU tends to be modal, depending on the micro-architecture constructed around it, irrespective of its raw power. If a workload shifts away from that optimisation point, latencies can rise, negatively impacting the performance of speech processing.

The applications of FPGAs are virtually inexhaustible, particularly as FPGA development comes up to par with software programming in ease of use.

Jim is optimistic about explosive growth. He believes people will soon stop asking what SmartNIC platforms are, and instead be keener to know how transformative they can be. But if you ask me, the real excitement lies in getting access to Intel’s technology and then jumping over to Microsoft Azure to enjoy a leaner and faster service.

Drawing on his extensive product knowledge from large-scale integration work, Jim puts it this way: we must tackle problems at a strategic level, not in a microcosm.

Conversation With Jim Dworkin, Senior Director Cloud Business Unit, Intel PSG

I recently had the opportunity to catch up with Jim Dworkin, Senior Director of the Cloud Business Unit at Intel PSG, to discuss recent news, insights, trends, and offerings around Field Programmable Gate Arrays (FPGAs) from Intel, along with related topics such as data centres, infrastructure from servers to networks, the Internet of Things, edge networking, Artificial Intelligence, compute and much more.

In this episode of our podcast show, Jim and I delve into a wide range of business and technology insights around how key CXOs and senior business and technology decision-makers can obtain immediate real-world business benefits by taking advantage of the tremendous technology and the supporting ecosystem of partners, integrators, and Intel teams globally. This show covers many recent FPGA trends you cannot miss, so please do tune in today!

Here are a few of the important points from our show:

  1. Latest macro FPGA Trends driving development/adoption of FPGAs/eASICs

We kick off with Jim sharing insights around what he and his team at Intel are currently seeing worldwide, as far as the latest macro trends driving the development/adoption of FPGAs/eASICs are concerned.

Jim also clarifies what an FPGA is, what Intel’s eASICs are, and where they each fit in the respective spaces around development, design, implementation, going into production, and more – a phenomenal overview to set the scene for this fantastic discussion.

  2. Obstacles & opportunities around the adoption of FPGAs/eASICs and market readiness

I ask Jim to share his take on the key hurdles & opportunities that he and his team at Intel PSG, and related teams at Intel, are seeing worldwide concerning the uptake and adoption of FPGAs/eASICs and market readiness.

  3. How Intel customers/partners see success with FPGAs/eASICs

Jim gives us an extraordinary briefing-level summary of how Intel customers and partners are seeing success with FPGAs/eASICs, as well as some great actionable takeaways listeners can put in place within their own organisations to gain real business and technology benefits across a wide range of key areas in both Information Technology and Operational Technology systems and environments.

This conversation covers a broad range of news and detail about Intel’s FPGA solutions business, and technology decision-makers should pay attention. Push PLAY now and tune into this great conversation. If you have any questions, reach out at any time via any of the usual channels such as Facebook, LinkedIn, Twitter, and others. We’d love to start a conversation and perhaps connect you with the best people at Intel to support your organisation’s outcomes.

This podcast covering Intel FPGA News was created in association with Intel.

Explore:

– Intel® FPGA Homepage: https://intel.ly/3gRRXm5

– Real-Time Text To Speech Synthesis Using Intel® Stratix® 10 NX FPGA (Video): https://intel.ly/37pjDLS

– Real-Time Text To Speech Synthesis Using Intel® Stratix® 10 NX FPGA (White Paper): https://intel.ly/3mo5PW3

– Pushing AI Boundaries with Scalable Compute-Focused FPGAs (White Paper): https://intel.ly/3gRZLnI

 

#sponsored, #intelinfluencer, #intel, #fpga, #easic, #asic, #edge, #xeon, #processor, #platform, #cpu, #gpu, #ai, #ml, #dl, #artificialintelligence, #deeplearning, #machinelearning, #bigdata, #analytics, #datascience, #iot, #device, #sensor, #networks, #telco, #mobile, #telecoms, #data, #protection, #security, #cybersecurity, #5G, #strategy

Telecom Security Innovations will allow telcos to offer security

Telecom Security Innovations & Services

It’s an exciting time to be involved in telecommunications. We have witnessed a major push towards remote work this year and telecom companies, both on the provider and supplier side, are scrambling to meet new demands. The growth is being driven by an exponential bump in traffic from both human and IoT footprints. To add to it all, many telcos are gearing up to launch 5G services as well – but, like all things in life, it’s a mixed bag.

The rise in digital activity across the globe has also been accompanied by a nearly unbelievable surge in cyberattacks, many of which are highly sophisticated and increasingly targeted at enterprises rather than individuals.

Many organisations and business leaders had a tough time in 2020 dealing with a relentless spate of virus, ransomware, phishing, and DDoS attacks. For their part, telecom providers have been forced to reconsider the limitations that come with legacy infrastructure, especially when it comes to ensuring the security of mission-critical data.

Also consider the sheer scope of change in business models for carriers and providers, if they could finally move away from their pitched battle around pricing. By offering high-value, premium security solutions that guarantee peace of mind, the telecom industry can create something customers will be willing to pay a higher price for: reliability.

The need to rethink security from an infrastructure perspective

The complexity of security issues has moved beyond ‘gatekeeping’ firewall solutions in the core. Think about it. Legacy security solutions are largely based on sampled traffic. The traditional firewall sits in the core network and is generally too busy dealing with high traffic volumes to do much beyond a basic source-destination check – leaving the network vulnerable to malicious content hidden, undetected, in individual packets of communication.
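
As a toy illustration of that limitation, here is a tiny sketch of a bare source-destination check in Python; the addresses and payload are invented, but the point stands: nothing in this style of rule ever looks inside the packet.

```python
# Toy model of a legacy source-destination check. Addresses and payloads
# are made up; the payload argument is accepted but never inspected.
ALLOWED_ROUTES = {("10.0.0.5", "192.168.1.20"), ("10.0.0.7", "192.168.1.21")}

def legacy_firewall_allows(src: str, dst: str, payload: bytes) -> bool:
    """Pass any packet whose addresses match an allowed route."""
    return (src, dst) in ALLOWED_ROUTES

# A packet carrying malicious content on an approved route sails straight through.
print(legacy_firewall_allows("10.0.0.5", "192.168.1.20", b"<malicious content>"))  # True
```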

These solutions are typically slow to detect and mitigate the kind of advanced attacks that are increasingly prevalent in 4G networks and will be “de rigueur” in 5G. While enterprise networks can be enmeshed in multiple layers of security, that technique is simply not tenable for telecom networks, which have too many connections going in every possible direction to be effectively protected.

The problem gets magnified and compounded for 5G. Picture the huge variety of devices feeding into the networks, ranging from very high-speed mobile broadband to numerous complex and connected IoT devices, vehicles, autonomous drones and more. To put this in perspective, these connections are estimated to generate up to 20 gigabits of traffic per second, all of which needs to be routed and checked for security. In the case of an attack, the volume of data generated can increase manifold and significantly stress system resilience.
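
As a rough back-of-envelope sketch, using the 20 gigabits per second figure just quoted, you can see how quickly unchecked traffic piles up across different detection windows, which is why the response times discussed below matter so much:

```python
# Back-of-envelope arithmetic only: traffic volume that passes during
# different mitigation windows on a 20 Gb/s flow.
LINK_GBPS = 20
bytes_per_second = LINK_GBPS * 1e9 / 8   # 20 Gb/s is roughly 2.5 GB per second

for window_name, seconds in [("1 minute", 60), ("1 second", 1), ("1 millisecond", 0.001)]:
    unchecked_bytes = bytes_per_second * seconds   # traffic seen before mitigation kicks in
    print(f"{window_name}: ~{unchecked_bytes / 1e6:,.1f} MB passes before mitigation")
```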

Telco Security Innovations initiative takes security to the next level

Telcos understand this only too well. And it’s not surprising that in recent surveys, the need for advanced security in application delivery controllers trumped evergreen asks from telecom providers like lower latency, higher capacity, and greater throughput.

This is why I was so excited to talk with Folke Anger, Head of Solution Line Packet Core at Ericsson Digital Services, and Yasir Liaqatullah, Vice President of Product Management at A10 Networks, about an interesting innovation in the security of 5G Core technologies: a high-performance, cloud-native firewall built into the Packet Core.

Building security into the DNA of 5G Core infrastructure

With CSPs moving from centralised data centres to the edge cloud, the threat landscape has evolved to a point where attacks need to be mitigated as they arise. This means bringing response times down from minutes or seconds to milliseconds. That’s physically impossible to achieve on legacy infrastructure, so Ericsson approached the problem differently.

They combined their cloud-native principles with the design of the user plane and built their Packet Core Firewall, powered by A10 Networks’ security capabilities, by adding micro-services into the user plane of the Packet Core Gateway. The result is a fully integrated security solution that eliminates the need for additional cloud-native functions, separate management or multiple instances.

The Packet Core solution is unique in embedding security within the data plane. It’s fully automated, backed by machine learning, and requires minimal human intervention, all of which results in millisecond-level mitigation of even advanced threats.

Opening up a new horizon for telecoms and carriers

The implications of security built into the DNA of 5G infrastructure are huge for the telecommunications industry.

For one, the resiliency brought in by an integrated security solution ensures that the infrastructure is strong enough to reconfigure and re-spawn itself in case of an attack and continue to function with minimal impact on latency.

It also offers granular security by monitoring all connections with full visibility and can detect threats as they appear and take corrective action – ensuring minimal human intervention and mitigation of attacks in milliseconds. It’s integrated with automation systems and can easily scale to keep pace with higher traffic volumes.

Implementing an integrated security solution will also result in a lower TCO for service providers. That, by itself, should be a huge benefit for companies negotiating various partnerships to offset the high costs of implementing 5G infrastructure. But it holds out scope for something much more important.

In effect, this innovation can finally offer what telecoms and carriers have been craving for years: a solid differentiator in terms of the value they offer to customers. The pricing battle that telecoms and carriers have been stuck in for years can finally end as they evolve towards more premium offerings that meet the core demand of many customers on faster networks – complete, reliable security for mission-critical applications.

Eventually, we might see security-as-a-service being bundled as a value add-on to service packages, but given the current threat landscape, security can be the differentiator that sets apart exceptional enterprise service providers from the rest.

I want to thank Folke Anger and Yasir Liaqatullah, and all the wonderful people at Ericsson Digital Services for making this interview possible. Please tune in to the conversation at the link below, and use the other resources to learn more about the technology and innovations we discussed.
