
If Big Data is beautiful, then Machine Learning is pure magic!


As you know, I’m slightly obsessed with machine learning (ML). Whenever I talk to other technologists, I can’t help but probe into their perspectives: what are they using ML for, how far have they gotten, what’s real right now, and what’s coming next?

Most recently, I was talking to Oliver Sturrock, CTO at Fluke Digital Systems, during a Conversations with Dez podcast – you can listen to the whole episode here. Oliver heads the engineering department at Fluke responsible for building their connected reliability framework, called Accelix. Their purpose is to connect reservoirs of machine health data and enable predictive analysis.

We talked about the progress Fluke has made in applying machine learning to all the maintenance and reliability data that their handheld tools and sensors vacuum up. After all, it wasn’t so long ago that all that measurement data just stayed on the test tool, and now it can all go up into the cloud.  

Oliver said yes, they are trying to get to the point where machine learning, with enough measurement data and the right algorithms, can mimic the “tribal knowledge” of a person with 30 years of maintenance experience. The grand vision is to be able to predict equipment failures and schedule both the maintenance resources and the appropriate parts to make the repairs ahead of time.
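To make that vision a little more concrete, here is a minimal sketch of the kind of anomaly detection that underpins predictive maintenance. It uses scikit-learn’s IsolationForest on simulated vibration and temperature readings; the features, values and thresholds are my own illustrative assumptions, not Fluke’s actual pipeline.

```python
# Minimal sketch of ML-based failure prediction on sensor data.
# Hypothetical features and values; NOT Fluke's actual pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated healthy baseline: [vibration_rms_mm_s, temperature_c]
healthy = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(1000, 2))

# Train an anomaly detector on known-good operating data only.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New field readings: the last one drifts toward a failure signature.
readings = np.array([[2.1, 61.0], [1.9, 59.5], [4.8, 78.0]])
for r, flag in zip(readings, model.predict(readings)):
    status = "ANOMALY - schedule inspection" if flag == -1 else "normal"
    print(f"vibration={r[0]:.1f} mm/s, temp={r[1]:.1f} C -> {status}")
```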

But there are considerations…

To start with, the technology solution can’t look like magic. As Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” But when all the team sees is a black box, all they can do is react to the decisions it makes. That’s exactly the kind of scenario that concerns a lot of customers, and it’s why they don’t want to give up experiencing the data directly. They have to have a reason to trust the solution.

Part of that trust is the quality of the incoming data. OEMs are now publishing big data sets that detail what their assets look like when they are operating normally. Systems like Accelix should be set to ingest all that data into their model and compare it to field conditions, baselines at installation, and so on. “This is what our device looks and sounds like when it’s normal, this is the vibration signature, this is the temperature.”
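In code, that ingest-and-compare step might look something like the sketch below: take an OEM-published “normal” band for each metric and flag field readings that fall outside it. The baseline numbers are invented for illustration, not drawn from any real OEM data set.

```python
# Comparing field readings against an OEM-published "normal" signature.
# Baseline bands are illustrative, not from any real OEM data set.
OEM_BASELINE = {
    "vibration_rms_mm_s": (1.4, 2.8),   # (low, high) normal band
    "temperature_c":      (45.0, 70.0),
}

def check_against_baseline(reading: dict) -> list[str]:
    """Return the metrics that fall outside the OEM normal band."""
    out_of_band = []
    for metric, (low, high) in OEM_BASELINE.items():
        value = reading[metric]
        if not (low <= value <= high):
            out_of_band.append(f"{metric}={value} outside [{low}, {high}]")
    return out_of_band

field_reading = {"vibration_rms_mm_s": 4.9, "temperature_c": 76.0}
print(check_against_baseline(field_reading) or "within OEM baseline")
```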

But Oliver called out that there are a LOT of environmental factors to account for: the equipment performs differently when connected to specific types of loads in various environments; how well it has been maintained and whether parts have been replaced on time both matter; and even the differences between the data signatures of various sensors can throw your model off. So Fluke is aiming to get 80% of the way there with out-of-the-box data, with the other 20% coming from in-field learning.
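One simple way to picture that 80/20 split is to start from the out-of-the-box baseline and let field observations gradually pull the expected “normal” toward local conditions. The exponential moving average below is just one hedged illustration of in-field learning; the smoothing factor is an arbitrary assumption.

```python
# Sketch of "in-field learning": nudging an out-of-the-box baseline
# toward observed local conditions with an exponential moving average.
def update_baseline(baseline: float, field_value: float,
                    alpha: float = 0.05) -> float:
    """Blend a new field observation into the expected normal value."""
    return (1 - alpha) * baseline + alpha * field_value

expected_temp = 60.0  # OEM out-of-the-box "normal" temperature (C)
for observed in [64.0, 65.5, 63.8, 66.1]:  # this site runs warmer
    expected_temp = update_baseline(expected_temp, observed)
print(f"site-adjusted normal temperature: {expected_temp:.1f} C")
```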

Even so, you still need to make the black box as see-through as possible. The single greatest frustration among companies I know is the closed data environments they encounter – proprietary data streams that don’t expose an API. It shouldn’t be that way. It should all be built around APIs and interconnectedness, with far less forced integration.
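For contrast, here is how trivial it is to consume a data stream that does expose an API. The endpoint URL, auth scheme and JSON shape below are all invented for illustration; the point is simply that open, API-first data takes a few lines to integrate.

```python
# Pulling sensor measurements from a hypothetical open REST API.
# Endpoint, auth scheme and JSON fields are invented for illustration.
import requests

API_URL = "https://api.example.com/v1/assets/pump-07/measurements"

resp = requests.get(
    API_URL,
    headers={"Authorization": "Bearer <token>"},
    params={"metric": "vibration", "limit": 10},
    timeout=10,
)
resp.raise_for_status()

for point in resp.json()["measurements"]:
    print(point["timestamp"], point["value"], point["unit"])
```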

Oliver was emphatic that Fluke has to be part of an ecosystem. The customer isn’t using just Fluke products for its measurement now, and Fluke can’t possibly build every type of analytics and data monitoring that’s ever going to be needed. But, says Oliver, there is definitely a role for Fluke to play in bringing all of it together – the various data streams and analytics.

There’s an airport I know of that has been through the digital transformation, and you can tell, because they are thinking in terms of what they can learn from the data. How can they plan the next five years of funding for maintenance? How can they market the value proposition that their airport is now safer thanks to its greater reliability and uptime? And then other parts of the organisation start to wake up to what this capability is delivering, and it gets even bigger.

I think there’s a very brave future ahead of us. I love that wherever I go, the systems around me are getting safer. It’s getting easier for maintenance people to keep equipment in good, reliable, SAFER operating condition. That’s a better world.

Learn more by listening to the entire recorded conversation between Oliver Sturrock and myself, or visit the Connected Reliability pages on Fluke.com.


FPGAs and eASICs – Preparing for the data-driven future at a systemic level


With the advent of data as the primary driver of business intelligence, the future of businesses across verticals now depends on how well they can differentiate themselves in acquiring and fine-tuning their data using tools such as AI, ML and deep learning – and, of course, by leveraging the cloud. This, in turn, has set the stage for a keen interest in Field Programmable Gate Arrays (FPGAs).

I recently had the good fortune to sit down with one of the key business leaders currently associated with development in the space – Jim Dworkin, senior director of the cloud business unit in the Programmable Solutions Group at Intel.

And as Jim put it, our ability to act on all the data being generated in a data-driven society really depends on unlocking the potential of that data through the smart insertion of FPGA technology to prevent application processors from getting ‘clogged’. Only then can the benefits be realised at scale in data centre applications.

The architecture being put into place now will determine the pathways of how we use data (including all the potentials and limitations) for years to come. 

This is why it’s crucial for businesses at this point to really reach out and educate themselves on the ‘hard wiring’ of data flows to build strategic pathways that reduce data movement, accelerate processing, speed time to market, lower total costs of ownership, add value for consumers, and manage all of this in an ecologically responsible fashion. The extent of technical and business benefits really depends on the access to and application of this knowledge.

For instance, the technology being offered by Intel has effectively become ‘off-the-shelf’ now, even as it remains open to being adapted to solve specific infrastructure or business problems. Intel’s trademarked eASIC (an application-specific integrated circuit), for example, comes prebuilt to a certain extent and can then be customised by businesses, as their applications require, to connect the circuits together.

The lines between eASIC and FPGA are increasingly blurred, as the former can now essentially function as a system-on-chip with millions of ASIC gates and a 16 nm processor with the processing power of a quad-core CPU – the same CPU found in Intel FPGAs.

You can even start with an FPGA and convert it to eASIC to reduce power consumption and unit cost. The business benefit is, of course, not having to spend an average of two years on toolset development to build the circuit from scratch to get more compute per watt. You can get the eASIC up and running at 10% of the nonrecurring engineering cost of an ASIC. The adoption of FPGAs and eASICs is being driven by use cases in infrastructure (SmartNICs) and application acceleration.
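To make the economics concrete, here is a toy break-even calculation between staying on FPGA and converting to eASIC. All the dollar figures are invented assumptions, not Intel pricing; only the “10% of ASIC NRE” ratio comes from the conversation above.

```python
# Toy break-even arithmetic for FPGA vs eASIC. All prices are
# invented assumptions, not Intel's; only the "eASIC NRE = 10% of
# ASIC NRE" ratio comes from the conversation above.
asic_nre = 2_000_000              # hypothetical full-ASIC NRE ($)
easic_nre = 0.10 * asic_nre       # eASIC at ~10% of ASIC NRE
fpga_unit, easic_unit = 900, 300  # hypothetical per-unit costs ($)

# Volume at which eASIC's NRE is paid back by its lower unit cost:
break_even_units = easic_nre / (fpga_unit - easic_unit)
print(f"eASIC pays off beyond ~{break_even_units:,.0f} units")
# Beyond that volume, converting the FPGA design to eASIC also cuts
# power per unit, per the conversion benefit described above.
```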

With their compute capabilities, SmartNICs are now capable of handling more sophisticated infrastructure workloads in cloud hosts (for example, AWS or Microsoft Azure), minimising the CPU cycles spent on infrastructure management so that providers can monetise those resources by renting out the freed-up cores. SmartNICs also provide incredible value in enabling virtualised services, such as the multi-tenant public cloud.

Moreover, with the mushrooming of hyper-scalers, the network infrastructure overhead can get daunting. The deployment of FPGAs at scale has helped manage that, as we have seen in the case of Microsoft deploying more than a million Intel FPGAs in Microsoft Azure servers.

Intel also recently launched an FPGA-based cloud SmartNIC platform that replicates the architecture used by hyper-scalers. This platform combines Intel’s high-performance Stratix 10 FPGA with an Intel Xeon D processor. These two devices work together on a SmartNIC card to create a mass-market solution for Tier-2 data centres, enabling virtual switching and more.

Intel is also investing heavily in natural language processing and recommender systems to enable more efficient AI, and has just launched a more powerful version of the FPGA that is capable of taking voice coder inputs while increasing channel density significantly.

Jim asserts that the performance of a GPU, no matter how powerful, tends to be modal and based on the micro-architecture built around it. If the data model shifts from the GPU’s optimisation point, latencies can go up and speech processing performance suffers, which can devastate a real-time system. The spatial architecture in an Intel FPGA is able to perform resiliently, tolerating modal and even parameter changes, which leads to interesting applications in the AI space.

The applications of FPGAs are virtually limitless, especially with FPGA development coming up to par with software programming in ease of use. FPGAs are well and truly ready to solve problems we may not even have noticed yet, and to stimulate the development of new applications and IP.

Jim is buoyant about exploding growth in the infrastructure acceleration space and hopes that the platform will be transformative in its impact. He throws us a hint with Microsoft’s recently released paper that discusses various AI use cases. 

And certainly, with Microsoft’s fantastic toolchain and scale, we could be expecting something that bridges right from the Azure core to the edges and even the endpoints. There is tremendous interest in the banking and FSI sector in using AI and ML for risk analysis of financial transactions and much more, as well as in blockchain acceleration and the currency trading space – pretty much any industry that has undergone rapid transformation and is starting to build up its standards.

For me, personally, the real excitement lies in being able to access Intel’s technology by paying just $25, jumping on Microsoft Azure, and making my product or service leaner and faster – or simply making people’s lives better.

For CIOs, CEOs and network administrators trying to sort out architectural redundancies and lower total cost of ownership through data centre efficiencies, Intel’s platform now lets you blend storage and network acceleration use cases on the same device.

As Jim, with his vast formative experience in large-scale integration work, puts it: we need to solve problems at a system level, not in a microcosm.


Jim Dworkin, Senior Director Cloud Business Unit, Intel PSG


I caught up with Jim Dworkin, Sr. Director of the Cloud Business Unit at Intel PSG, to talk about the latest news, trends, insights and offerings around Field Programmable Gate Arrays (FPGAs) from Intel and the surrounding areas: data centres, infrastructure from servers to networks, the Internet of Things, artificial intelligence, edge networking and compute, and much more.

In this show we have a fantastic combination of business and technology insights into how key CXOs and senior business and technology decision makers can gain real-world, immediate benefits by leveraging this amazing technology and its supporting ecosystem of partners, resellers, integrators et al, plus the Intel teams around the world – this is not a show to miss, just push play now.

Our conversation covers the following and much much more:

1. Current macro trends driving development/adoption of FPGAs/eASICs

We kick off with Jim sharing insights into what he and his team at Intel are currently seeing around the world in terms of the macro trends driving the development and adoption of FPGAs/eASICs.

Jim also clarifies what FPGAs are, what Intel’s eASICs are, and where each fits into the respective spaces of design, development, implementation and going into production – a fantastic overview to set the scene for this amazing conversation.

2. Challenges & opportunities around adoption of FPGAs/eASICs and market readiness

I ask Jim to give us his take on the key challenges and opportunities he and his team at Intel PSG, and related teams around the world, are seeing regarding the adoption of FPGAs/eASICs and market readiness.

3. How Intel customers / partners are finding success with FPGAs/eASICs

Jim goes on to map out a great briefing-level overview of how Intel customers and partners are finding success with FPGAs/eASICs, along with some wonderful key takeaways listeners can action within their own organisations, either right now or in the short to medium term, to gain business and/or technical benefits across a broad range of key areas in both IT and OT systems and technologies.

This is a MUST LISTEN conversation – push PLAY and join in. If you have any questions, please do reach out via any of our other channels, including LinkedIn, Facebook, Twitter et al; we would love to help connect you with the best people to support your organisation’s outcomes.

This podcast was made in partnership with Intel.

For more information please visit:

– Intel® FPGA Homepage: https://intel.ly/3gRRXm5

– Real-Time Text To Speech Synthesis Using Intel® Stratix® 10 NX FPGA (Video): https://intel.ly/37pjDLS

– Real-Time Text To Speech Synthesis Using Intel® Stratix® 10 NX FPGA (White Paper): https://intel.ly/3mo5PW3

– Pushing AI Boundaries with Scalable Compute-Focused FPGAs (White Paper): https://intel.ly/3gRZLnI

#sponsored, #intelinfluencer, #intel, #fpga, #easic, #asic, #edge, #xeon, #processor, #platform, #cpu, #gpu, #ai, #ml, #dl, #artificialintelligence, #deeplearning, #machinelearning, #bigdata, #analytics, #datascience, #iot, #device, #sensor, #networks, #telco, #mobile, #telecoms, #data, #protection, #security, #cybersecurity, #5G, #strategy


Innovation paves the way for telcos to offer security as a value differentiator


It’s an exciting time to be involved in telecommunications. We have witnessed a major push towards remote work this year and telecom companies, both on the provider and supplier side, are scrambling to meet new demands. The growth is being driven by an exponential bump in traffic from both human and IoT footprints. To add to it all, many telcos are gearing up to launch 5G services as well – but, like all things in life, it’s a mixed bag.

The rise in digital activity across the globe has also been accompanied by a nearly unbelievable surge in cyberattacks, many of which are highly sophisticated and increasingly targeted at enterprises rather than individuals.

Many organisations and business leaders had a tough time in 2020 dealing with a relentless spate of virus, ransomware, phishing, and DDoS attacks. For their part, telecom providers have been forced to reconsider the limitations that come with legacy infrastructure, especially when it comes to ensuring the security of mission-critical data.

Also consider the sheer scope of change in business models for carriers and providers, if they could finally move away from their pitched battle around pricing. By offering high-value, premium security solutions that guarantee peace of mind, the telecom industry can create something customers will be willing to pay a higher price for: reliability.

The need to rethink security from an infrastructure perspective

The complexity of security issues has moved beyond ‘gatekeeping’ firewall solutions in the core. Think about it. Legacy security solutions are largely based on sampled traffic. The traditional firewall sits in the core network and is generally too busy dealing with high traffic volumes to do much beyond a basic source-destination check – leaving the network vulnerable to malicious content housed undetected in particular packets of communication.
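To see why a basic source-destination check leaves payloads unexamined, consider this deliberately naive sketch of legacy-style filtering. The rule set and packet format are invented for illustration; the point is that the payload never enters the decision.

```python
# Deliberately naive "legacy firewall" sketch: it checks only the
# source/destination pair and never looks at the payload, so
# malicious content rides through unexamined. Illustrative only.
BLOCKED_SOURCES = {"203.0.113.66"}       # documentation-range IP
ALLOWED_DESTINATIONS = {"10.0.0.5:443"}

def basic_check(packet: dict) -> bool:
    """Source-destination check only; the payload is never inspected."""
    if packet["src"] in BLOCKED_SOURCES:
        return False
    return packet["dst"] in ALLOWED_DESTINATIONS

packet = {
    "src": "198.51.100.7",
    "dst": "10.0.0.5:443",
    "payload": b"...exploit bytes the check never sees...",
}
print("forwarded" if basic_check(packet) else "dropped")  # forwarded!
```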

These solutions are typically slow to detect and mitigate the kind of advanced attacks that are increasingly prevalent in 4G networks and will be de rigueur in 5G. While enterprise networks can be enmeshed in multiple layers of security, that technique is simply not tenable for telecom networks, which have too many connections going in every possible direction to be effectively protected.

The problem gets magnified and compounded for 5G. Picture the huge variety of devices feeding into the networks, ranging from very high-speed mobile broadband to numerous complex and connected IoT devices, vehicles, autonomous drones and more. To put this in perspective, 5G networks are designed to support peak data rates of up to 20 gigabits per second – traffic that all needs to be routed and checked for security. In the case of an attack, the volume of data generated can increase manifold and significantly stress system resilience.
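Some back-of-the-envelope arithmetic shows how small the per-packet time budget gets at those rates, and why mitigation windows shrink toward milliseconds and below. The 1,500-byte packet size is a common Ethernet MTU assumption.

```python
# Back-of-the-envelope time budget at a 20 Gb/s 5G peak rate,
# assuming 1,500-byte (common Ethernet MTU) packets.
link_bps = 20e9          # 20 gigabits per second
packet_bits = 1500 * 8   # bits per 1,500-byte packet

packets_per_second = link_bps / packet_bits
ns_per_packet = 1e9 / packets_per_second

print(f"{packets_per_second / 1e6:.2f} million packets/s")  # ~1.67M
print(f"~{ns_per_packet:.0f} ns to inspect each packet")    # ~600 ns
```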

Telcos understand this only too well. And it’s not surprising that in recent surveys, the need for advanced security in application delivery controllers trumped evergreen asks from telecom providers like lower latency, higher capacity, and greater throughput.

This is why I was so excited to talk with Folke Anger, Head of Solution Line Packet Core at Ericsson Digital Services, and Yasir Liaqatullah, Vice President of Product Management at A10 Networks, about an interesting innovation around the security of 5G Core technologies: a high-performance cloud-native firewall built into the Packet Core.

Building security into the DNA of 5G Core infrastructure

With CSPs moving from centralised data centres to edge cloud, the threat landscape has evolved to a point where attacks need to be mitigated as they arise. This means bringing down the scale of response time from minutes or seconds to milliseconds. That’s physically impossible to achieve on legacy infrastructure, so Ericsson thought about the problem differently.

Ericsson combined its cloud-native principles with the design of the user plane and built its Packet Core Firewall, powered by A10 Networks’ security capabilities, by adding microservices into the user plane of the Packet Core Gateway. The result is a fully integrated security solution that eliminates the need for additional cloud-native functions, separate management or multiple instances.

The Packet Core solution is unique in embedding security within the data plane. It’s fully automated, backed by machine learning, and requires minimal human intervention – all of which results in millisecond-level mitigation of even advanced threats.
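Conceptually, embedding security in the data plane means each packet is scored and, if necessary, dropped in-line rather than sampled, exported and acted on later. The sketch below is a generic illustration of that pattern only; it is not Ericsson’s or A10 Networks’ implementation.

```python
# Generic illustration of in-line (data-plane) mitigation: packets are
# scored and dropped on the spot, instead of being sampled, exported,
# and acted on seconds later. Not Ericsson's or A10's implementation.
def threat_score(packet: bytes) -> float:
    """Placeholder for an ML-backed scorer running inside the user plane."""
    return 0.97 if b"exploit" in packet else 0.02

def user_plane_forward(packet: bytes) -> str:
    # The decision happens in the forwarding path itself: no round trip
    # to a central analytics system, so mitigation is near-instant.
    if threat_score(packet) > 0.9:
        return "dropped in-line"
    return "forwarded"

print(user_plane_forward(b"normal traffic"))       # forwarded
print(user_plane_forward(b"...exploit bytes..."))  # dropped in-line
```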

Opening up a new horizon for telecoms and carriers

The implication of security built into the DNA of the 5G infrastructure is huge for the telecommunication industry.

For one, the resiliency brought in by an integrated security solution ensures that the infrastructure is strong enough to reconfigure and re-spawn itself in case of an attack and continue to function with minimal impact on latency.

It also offers granular security by monitoring all connections with full visibility and can detect threats as they appear and take corrective action – ensuring minimal human intervention and mitigation of attacks in milliseconds. It’s integrated with automation systems and can easily scale to keep pace with higher traffic volumes.

Implementing an integrated security solution will result in lower TCO for service providers. That, by itself, should be a huge benefit for companies forming partnerships to manage the high costs of implementing 5G infrastructure. But it holds out scope for something much more important.

In effect, this innovation can finally offer what telecoms and carriers have been craving for years: a solid differentiator in terms of the value they offer to customers. The pricing battle they have been stuck in can finally end as they evolve towards premium offerings that meet the core demand of many customers on faster networks – complete, reliable security for mission-critical applications.

Eventually, we might see security-as-a-service being bundled as a value add-on to service packages, but given the current threat landscape, security can be the differentiator that sets apart exceptional enterprise service providers from the rest.

I want to thank Folke Anger and Yasir Liaqatullah, and all the wonderful people at Ericsson Digital Services for making this interview possible. Please tune in to the conversation at the link below, and use the other resources to learn more about the technology and innovations we discussed.
