
Mainframe Modernization is a Non-Sequitur


Whenever anybody uses the term “mainframe modernization” I sort of wince and look at them like they don’t know what they’re talking about. Because they really don’t. You see, there is no need… the mainframe already IS a modern platform. It is not the mainframe – where the current state of the art is the IBM z16 – that needs to be modernized. Applications running on the mainframe may need to be modernized, sure. So we need to train people to call it what it is: application modernization!

Don’t Misunderstand…

For all intents and purposes, the age of the mainframe began in April 1964, when IBM announced the System/360 mainframe. All IBM mainframes since 1964 have evolved from the System/360 and its architecture. There have been numerous changes in the 50+ years since then because IBM constantly updates its mainframe technology, hardware, and infrastructure to ensure that it delivers high availability and speed with modern development and processing capabilities.

And that means over the course of the mainframe’s life, customers have developed a large financial investment in their mainframe applications and data. Many of these applications were developed and refined over decades. Some were written many years ago, while others may have been written just “yesterday.”

So, of course, there is old code running on the mainframe… some of it written perhaps 50 years ago or more. And some of that code could benefit from modernization. But the platform itself, the hardware and system software, is quite modern.

What Do You Mean, Modern?

Roughly every four years, IBM releases a new IBM Z mainframe, and the IBM z16 was unveiled earlier this year (2022). The IBM z16 provides 17% more processor capacity per CPC drawer compared to its predecessor, the IBM z15.

The highlight of the new IBM z16 is the IBM Telum chip, which serves as its central processor chip. Built atop 7nm chip technology, the IBM Telum is designed with on-chip acceleration for AI inferencing that enables deep learning inference at scale. The on-chip AI scoring logic provides sub-microsecond AI inferencing for deep learning and complex neural network models. The IBM z16 can process 200 billion inference requests per day within one millisecond of latency.

Furthermore, the IBM z16 can be configured with up to 40 TB (10 TB per CPC drawer) of addressable memory per system. The IBM z16 also delivers state-of-the-art I/O supporting up to 12 I/O drawers. Each I/O drawer can support up to 16 I/O or special purpose features for storage, network, and clustering connectivity, as well as cryptography.

When it comes to security and encryption, it is hard to beat the mainframe. With Pervasive Encryption, IBM customers can encrypt 100% of their data. And with Data Privacy Passports it is possible to use the mainframe to protect data on and off the platform.

Furthermore, with the upcoming advent of quantum computing and its heralded potential for breaking current encryption, the IBM z16 is the industry’s first quantum-safe system. The system’s quantum-safe secure boot technology helps to protect IBM Z firmware from quantum attacks through a built-in dual signature scheme with no changes required. IBM z16 quantum-safe technology and key management services were developed to help you protect data and keys against a potential future quantum attack such as the “harvest now, decrypt later” approach.

And we haven’t even mentioned Parallel Sysplex, which has been available for years but still delivers an almost unheard of 99.99999% availability (yes, that is seven nines, or an average of about 3.16 seconds of downtime per year)! Although an in-depth discussion of Parallel Sysplex would take some time, suffice it to say that it is a clustering technology that enables multiple copies of z/OS to run as a single image from a single point of control. Parallel Sysplex lets you share data and applications across multiple systems. High availability is delivered with the ability to remove an image and make changes while applications continue to run. And IBM is making constant improvements, such as the enhanced ICA-SR coupling link protocol delivered with the IBM z16, which can improve read and lock requests by up to 10% and write requests by up to 25%.
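If you want to sanity-check that seven-nines figure, the arithmetic is simple enough to verify yourself:

```python
# Downtime per year implied by "seven nines" (99.99999%) availability.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # about 31.6 million seconds

availability_pct = 99.99999
downtime_fraction = 1 - availability_pct / 100  # 0.0000001
downtime_seconds = SECONDS_PER_YEAR * downtime_fraction

print(f"{downtime_seconds:.2f} seconds of downtime per year")  # ~3.16
```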

Another availability innovation of the IBM Z is System Recovery Boost, which redirects all of the capacity in the system to starting (or restarting) your system. This means that zIIPs and other special purpose capabilities of the mainframe can be diverted to boost performance during startup, thereby minimizing downtime.

From a system software perspective, middleware like MQ, CICS, and WebSphere, along with one of the industry’s leading DBMSes in Db2, delivers top-notch, modern capabilities for application development. It is also possible to run Linux on the mainframe instead of, or in addition to, z/OS. And for those looking for a friendlier way in, the open source Zowe framework provides a more modern interface to z/OS than the character-based ISPF.
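To give a flavor of what that modern interface looks like in practice, Zowe and the tools built on it communicate with z/OS through REST interfaces such as the z/OSMF APIs, which means an ordinary script can do work that once required a green screen. Here is a minimal sketch in Python; the host, credentials, and data set qualifier are placeholders, and you should confirm the endpoint details against your site’s z/OSMF documentation:

```python
# Hypothetical sketch: list z/OS data sets via the z/OSMF REST files API.
# The host, credentials, and "IBMUSER" qualifier are placeholders.
import requests

ZOSMF_HOST = "https://mainframe.example.com"  # placeholder z/OSMF host

resp = requests.get(
    f"{ZOSMF_HOST}/zosmf/restfiles/ds",
    params={"dslevel": "IBMUSER.*"},          # high-level qualifier to search
    headers={"X-CSRF-ZOSMF-HEADER": "true"},  # CSRF header z/OSMF expects
    auth=("ibmuser", "secret"),               # placeholder credentials
    timeout=30,
)
resp.raise_for_status()

for ds in resp.json().get("items", []):       # "items" holds the data set list
    print(ds.get("dsname"))
```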

Truly, only the most stubborn readers will be unconvinced that the mainframe is, indeed, a modern platform.

So, What Needs to be Modernized?

Even with its rich history and the continual upgrades that keep it current for modern workloads, there are valid concerns about how we can keep the platform viable into the future. A concern voiced regularly is the aging workforce. Mainframers are getting older, and organizations are just not replacing them successfully. We hear about a skills gap, where new hires are not capable of performing the duties of the recently (or soon-to-be) retired. There is also the challenge of integrating the mainframe with new technologies and capabilities, which can require a mix of skills that is difficult to obtain.

As such, many organizations are beginning to focus on ways to modernize their application portfolios, such as migrating to more modern programming languages and integrating DevOps processes and procedures. These are valid techniques when undertaken with a well-thought-out plan, knowledge of your organization’s infrastructure and applications, and familiarity with both the existing languages in use (COBOL, PL/I) and the new languages being adopted (e.g., Java, C).

From a functional perspective though, keep in mind that your old code still works! Maintaining it may be difficult, but it isn’t going to just stop working any time soon!

So how can you modernize your mainframe systems? Industry analysts such as Gartner outline seven approaches that can be taken to modernize legacy systems:

1. Encapsulate: extend the application’s features, data, and functionality and make them available as services through an application programming interface (API); a minimal sketch of this approach appears after this list.
2. Rehost: redeploy an application component to another physical, virtual, or cloud infrastructure without altering the code or modifying features and functions.
3. Replatform: migrate an application component to a new runtime platform. Code changes are required, but the goal is to minimize them, adapting to the new platform without modifying the code structure or the features and functions it provides.
4. Refactor: restructure existing code, improving its design and structure without changing its external behavior.
5. Rearchitect: a bit more intense; materially alter the application code so you can shift it to a new application architecture and fully exploit new and better capabilities of the platform.
6. Rebuild: even more work; rewrite the application component from scratch while preserving its scope and specifications.
7. Replace: eliminate the former application component altogether and replace it, taking new requirements and needs into account.
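As promised in option (1), here is a minimal sketch of the encapsulation approach: a thin REST wrapper that makes existing functionality callable as a service without touching the legacy code itself. Everything here is hypothetical – the check_balance function stands in for however you actually reach the legacy logic (a CICS transaction, a Db2 query, a z/OS Connect service, and so on):

```python
# Hypothetical sketch of the "encapsulate" modernization approach:
# expose legacy functionality as a REST API. Flask is used for brevity.
from flask import Flask, jsonify

app = Flask(__name__)

def check_balance(account_id: str) -> dict:
    """Placeholder for a call into existing mainframe logic
    (e.g., driving a CICS transaction or querying Db2).
    Returns canned data purely for illustration."""
    return {"account": account_id, "balance": 1234.56, "currency": "USD"}

@app.route("/accounts/<account_id>/balance")
def balance(account_id: str):
    # The legacy routine is unchanged; only this new interface is added.
    return jsonify(check_balance(account_id))

if __name__ == "__main__":
    app.run(port=8080)
```

The appeal of this approach is that the proven code keeps doing what it already does well; only the way other systems reach it changes.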

Of course, each option is not always possible. Every platform and application you wish to modernize will have requirements that make certain of these options more or less feasible. Nevertheless, completely rebuilding or replacing are typically not the best approaches when modernizing mainframe applications. After all, as one executive I talked to put it, “Why would you rebuild or replace what already works instead of tweaking it to make it more modern or accessible?”

The Bottom Line

According to resources cited by IBM, mainframes are used by 71 percent of Fortune 500 companies, and IBM mainframes handle 90 percent of all credit card transactions as well as 68 percent of the world’s production IT workloads. Forty-four of the top 50 banks and all of the top 10 insurers worldwide use mainframes. And despite such pervasive usage, mainframes account for only 6 percent of IT costs!

As we learned here, IBM keeps up with technology and integrates modern capabilities into the platform on a regular basis. Big business relies on the mainframe to serve billions of transactions each day, and IBM, ISVs, and user organizations continue to support mainframe development with new practices and procedures.
As such, the bottom line is this: the mainframe is a modern platform, and it is here to stay!


Data Professionals Ready to Head Back to In-Person Conferences


The past couple of years have been a sort of barren wasteland for in-person tech conferences. Meeting in large groups was not wise during the COVID pandemic, causing hotels and convention centers everywhere to sit empty. However, it looks like things are beginning to open back up in the world of data-focused tech conferences in 2022.

2020 and 2021

Before we look at this year, let’s first look back and reflect on the impact of COVID on the world of technology and data management conferences. Early in 2020, as the severity of the COVID pandemic became apparent, many tech conferences were either canceled or revamped.

One of the earliest events to be modified was the IBM Think 2020 event, originally scheduled for the week of May 4-7, 2020 in San Francisco. IBM canceled the in-person conference and announced a “Digital Event Experience” to occur May 5-6. This amounted to what most conferences became during the pandemic – watching presentations over the web, like a bunch of webinars. Many other events followed suit, with IDUG Db2 Tech Conference, Microsoft Build, Salesforce Connections, Oracle World, and many others eventually delivering online versions of their events.

Instead of revamping into an online event, some conferences simply canceled; some for good! The first harbinger of things to come was the cancellation of the South by Southwest conference, originally scheduled for March 13-22, 2020. Industry analyst firm Gartner canceled or postponed all of its conferences from April through August 2020. And perhaps the biggest news of all, O’Reilly Media closed its conference business for good, which sounded the death knell for its annual Strata conference. Data management professionals will miss Strata, as it was one of the only vendor-neutral events that focused on data and AI.

As 2020 turned into 2021, holding conferences primarily online became the modus operandi. Some began to wonder if COVID would kill the in-person tech conference once and for all. For many reasons, I don’t see that happening.

For one thing, an online event is easy to disengage from. Sitting at your laptop watching a presentation is no substitute for an in-person presentation where there are no disturbances and you can easily interact with the presenter (via Q&A or even tracking them down after they speak). On the other hand, it is very easy to stop paying attention when you are watching online. I wonder what that email that just showed up was… let’s check… oh, I’ve been waiting for that {clicks a link and starts web browsing}… Was that the doorbell? Probably, because the dog is barking… and so on… none of those distractions exist when you are sitting in a room at a conference!

Furthermore, people like to interact with their peers to share and learn things. That is much easier to do at an industry event where everybody has similar interests. So, in-person events went away for a period of time because it was the prudent thing to do to stop the spread of COVID. But what about now?

2022

Well, the availability and effectiveness of COVID vaccines, and people’s desire and willingness to attend conferences, seem to mean that 2022 will see a resurgence in live, in-person tech and data management events. I am not saying that the threat of COVID is over (far be it from me, as I am no immunologist or M.D.), but the general mood seems to be that it is once again time to attend conferences. Hopefully, we will do it safely.

Furthermore, I contend that an in-person product demonstration can be more illuminating and educational than similar online demos. And it is easier to ask questions and clear up confusing issues or technical problems face-to-face than it is over the web.

In March 2022, I attended my first in-person event since the COVID pandemic began: the SHARE conference in Dallas, TX. I was not sure what to expect, but the event was well-attended and people seemed genuinely enthused to be participating and interacting with their peers again. If this is indicative of the overall mood of the tech world then 2022 will probably be the year that you start attending your favorite data and tech events again. And from what I see, there will be no shortage of opportunities for you to attend.

Upcoming Data Management Conferences

Coming up soon is Data Summit 2022, taking place in Boston on May 17-18, 2022. This event is managed by Unisphere and bills itself as the data management and analytics conference. I have attended and spoken at this event in the past. It offers a lot of educational value, particularly in the realm of data management, analytics, and AI. There are co-located sub-events focused on DataOps and AI/ML that may be of interest.

Next up is the World Data Summit taking place May 18-20, 2022 in Amsterdam, The Netherlands. This three-day event hosts speakers knowledgeable on all aspects of data analysis, unstructured data, data visualization, and more.

One conference that I am very much looking forward to attending again is the North American IDUG Db2 Tech Conference, to be held this year in Boston, MA from July 11-14, 2022. If you use IBM’s Db2 DBMS then this is an event you won’t want to miss… not to mention that I will be delivering a couple of presentations there this year, too!

And I cannot mention IBM without also mentioning Oracle, can I? Well, this year Oracle OpenWorld has a new name – Oracle CloudWorld. And it has a new location, too! Although Oracle held its events in San Francisco for many years, this year the event will be in Las Vegas from October 16-20, 2022. There is a bit of irony in renaming an in-person conference to CloudWorld, though! The online events held the past couple of years during the pandemic were more cloud-like virtual events. But I understand the reasoning, what with cloud computing becoming so popular these days.

Of course, there are many other in-person events that might be of interest beyond those mentioned here, and I am probably missing a few.

The Bottom Line

2022 looks like the year that in-person tech conferences will be returning, so if you are comfortable traveling and being out and amongst people again this year, you might want to start making your plans to attend your favorite events.


Talk with George DeCandio and Peter Wassel


I recently had the pleasure of talking with two amazing people: George DeCandio, Chief Technology Officer of the Mainframe Software Division at Broadcom, and Peter Wassel, Director of Product Management & Strategy in the same division. We had an excellent discussion about the latest in Broadcom Mainframe DevOps and how Broadcom is helping its mainframe clients unlock the value of their mainframe applications with modern tooling and processes, like DevOps, and through open interfaces.

The conversation began with brief, enthusiastic introductions from both George and Peter. They talked about the opportunities for DevOps on the Broadcom mainframe, the challenges, and offered top-notch insights into what is going on inside Broadcom and across its ecosystem of customers and partners.

We then considered the impact and potential of opening up the mainframe, the opportunities available, and the latest news in Mainframe DevOps. We discussed Broadcom’s open-first message, why being non-prescriptive and low-opinion is valuable, and why that strategy matters for companies that want to grow.

Next, we dug into what it means to be open (SDKs, APIs, CLIs, IDEs) and turned to Zowe and IDEs like Eclipse Che and Visual Studio Code. George and Peter gave some background on what the Open Mainframe Project does, took us through Broadcom’s commitment and contributions to it and why it counts, and looked at what off-platform tooling means in practice, i.e., enterprise DevOps toolchains and the like.

The discussion also offered a comprehensive view of what Broadcom is doing for Mainframe DevOps, from both business and technical perspectives. George and Peter shared their thoughts from inside the business, as well as key highlights from around the world and what they are accomplishing with customers to drive successful outcomes.

A few of the topics we covered include:

  • Their commitment to IDEs and Zowe via CA Brightside,
  • Their focus on Git and the CA Endevor Bridge for Git,
  • The importance of the mainframe developer cockpit, and the idea that, in the modern workplace, everyone becomes a developer.

Another vital topic we discussed is that Broadcom understands that Mainframe DevOps is a journey, one that requires cultural change, the latest tools, new practices, and more. That can indeed be an overwhelming change if not done right. We also learned more about Broadcom’s no-fee offerings and why the company views this journey as a partnership rather than just a vendor-customer relationship.

We ended with a quick look at the design thinking workshops Broadcom offers. Check out the DevOps link below; your organization may get real value from this offering.

Listen in to hear about these topics and more.

This podcast was created in association with the Mainframe division of Broadcom.

To get the details, visit:

http://bit.ly/BroadcomMainframe 

For a free MRI Security Essentials assessment today, visit:

http://bit.ly/broadcommritrial

Learn more about the Mainframe DevOps design thinking workshop here:

https://bit.ly/broadcommainframedevops


Dez Talking with Per Kroll, Senior Director of R&D, Broadcom Mainframe Division


I recently met with Per Kroll, Senior Director of R&D in the Broadcom Mainframe Division. We had a conversation about mainframes, AIOps, and connecting mainframes to the cloud, along with the latest trends and news in the mainframe world and its corresponding tech and business markets. We also discussed what’s new inside the Mainframe Division of Broadcom, especially in the world of research and development, which Per heads up.

We started the interview by getting to know Per. He shared some inspiring stories about his early years in life and work, covering his academic journey and career, and what drew him to AIOps and the mainframe world.

Later, we talked about vital topics like how the mainframe makes for a better hybrid cloud and the shifts enabling it. Per is well placed to speak on this, having spent most of his working life on the distributed side. Over the last six years, he has directed the evolution of AIOps and DevOps solutions for the Broadcom mainframe. With all of that in mind, Per shared what he sees as the pivotal trends on the mainframe over the last few years.

Per also shared his views on contextual insight – the capability to make sense of disparate data – and on how the mainframe is a foundational computing platform for the digital economy, given its ability to handle huge workloads without disruption. Per also expanded on the role of AIOps and why its popularity is rising.

Further, he outlined the journey toward self-healing systems – an incremental path that people are “always selling” – and what is feasible today versus the path ahead. Per also explained why, when customers talk about AIOps, they often speak about self-healing systems, and gave his outlook on what is fact versus imagination in that arena. He also conveyed his take on the key steps organizations can take to get started down this path.

We concluded the discussion with Per’s ideas on what’s coming over the horizon in the next 3 to 5 years. The evolution of DevOps, AIOps, and openness has moved so quickly over the previous few years that a gap has opened between how the average Broadcom mainframe customer operates and what is achievable.

Another vital insight Per shared is his hope, and what he believes will occur: that early adopters will showcase what is viable at scale, and that this will speed up adoption among the early majority and the late majority. As he put it, in the words of Geoffrey Moore, he hopes the great thing happening over the coming years is that we will cross the chasm for the various technologies and solutions we talked about.

Listen in for all of these fantastic subjects and more.

The podcast was created in collaboration with the Mainframe division of Broadcom.

To get more details, visit:

http://j.mp/BroadcomMainframe

For a free MRI Security Essentials assessment, visit:

http://bit.ly/broadcommritrial

