Attending Informatica World presents the opportunity to speak with IT professionals about one of the strongest use cases for job scheduling and workload automation: the end-to-end automation of ETL, data warehousing and business intelligence (BI) processes.
In recent years, the democratization of analytic, reporting and BI solutions has become a driving force behind the growing complexity of data integration and data warehousing models. Add to the equation the growing complexity and volume of information thanks to Big Data, and it’s no surprise that the underlying ETL and data warehousing processes that integrate and access data from multiple sources are becoming increasingly complex.
The IT organization is caught between a rock and a hard place. On one hand you have the business…the consumers of data…and the concept of “agile BI,” that is, the ability of IT to more seamlessly and efficiently update data warehousing processes, and thus subsequent downstream BI and reporting, to better meet business demands. On the other hand, the complexity of these underlying data warehousing processes is largely driven by the increasing number of data integration solutions, tools and data sources.
Sessions at last year’s Gartner ITExpo highlighted these issues. According to Gartner, the idea of the single data warehouse model is dead; a federated, heterogeneous collection of data warehouses is the new model and it is forcing IT organizations to streamline the movement of data between multiple repositories in support of real-time analytics. “Data process modeling, mapping and automation will be the key to conquering the challenges associated with Big Data,” said Daryl Plummer at last year’s show.
IT organizations have recognized this and are attempting to automate, but their current approach has serious limitations. For example, nearly all data warehouse, ETL and BI solutions have native batch scheduling capabilities, but each is limited to scheduling work on its own system. As a result, IT is forced to rely on error-prone and time-consuming scripting to pass data and manage dependencies between the various components that comprise the modern data warehousing process.
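To make that concrete, here is a minimal sketch (in Python, with invented script names) of the kind of glue code IT teams end up writing to chain an extract, a warehouse load and a report refresh together. Every dependency and failure path has to be hand-coded and hand-maintained:

```python
#!/usr/bin/env python3
"""Hypothetical glue script chaining ETL, warehouse load and BI refresh.

The command names below are illustrative, not real products' CLIs --
the point is that every dependency, hand-off and failure path must be
hand-coded, which is the scripting burden described above.
"""
import subprocess
import sys

STEPS = [
    ["run_etl.sh", "--source", "crm"],           # extract/transform (invented)
    ["load_warehouse.sh", "--target", "dw1"],    # load into the warehouse
    ["refresh_reports.sh", "--suite", "sales"],  # downstream BI refresh
]

for step in STEPS:
    # Sequential and all-or-nothing: no cross-system visibility, no
    # restart from the point of failure, no event-based triggering.
    result = subprocess.run(step)
    if result.returncode != 0:
        print(f"step {step[0]} failed; operator must intervene", file=sys.stderr)
        sys.exit(result.returncode)
```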
Unfortunately, scripting creates a barrier to realizing the benefits of a concept like “agile BI.” Relying on scripting to manage the extract/warehousing/reporting process makes it impossible for IT organizations to respond to the requirements of the business, which increasingly demands the ability to run reports on demand or on an intra-day cycle.
Take for example this post I came across from a solutions architect at our partner IBM. It’s a classic example of the sort of processes that IT organizations are automating via scripting within a data warehousing environment. In this case, the process includes using Netezza to run an ETL process, uploading that data into SAS and running a scoring model. The author, Thomas Dinsmore, a solutions architect at IBM, sums up the disadvantages of custom coding:
"Custom-coding a scoring model from scratch takes time to design, build, test, validate and tune; customers report cycle times of three to six months to deploy a scoring model into production. Manual coding also introduces a source of error into the process, so that scoring jobs must be exhaustively validated to ensure they produce the same results as the original predictive model."
This recent article on ZDNet by Dana Gardner, president and principal analyst at Interarbor Solutions, further underscores the issues faced by IT organizations that have adopted an “elemental” approach to automating data integration and data warehousing processes. To use Dana’s words, “data dichotomy” is forcing businesses of all sizes to manage increasing volumes of both internally generated and external data in order to identify new customers and drive new revenue. Yet the use of multiple “point” data integration solutions creates an “agnostic tool chain” that stands between IT and delivering that data to the end user via reporting and BI.
The idea behind workload automation is to take an “architectural” approach by unifying those “point” integration solutions into a single framework. The result is ease of authorship, control and upkeep, thereby eliminating the “source of error” that Dinsmore speaks to. IT organizations are able to integrate all data pathways into automated, repeatable processes that deliver control and visibility over every step of the data warehouse/reporting process by intelligently managing the dependencies, constraints and completion of the jobs within these workflows. These steps can include everything from lower-level tasks and data center functions, such as scheduling file transfers, database backups and database services like SSIS or SSRS, to production-ready job steps for high-end database appliances and analytic solutions such as Informatica, Netezza, Teradata, DataStage, Cognos, SAP BusinessObjects and more.
And the benefit to the business? It gives them the return on Big Data they've been looking for by allowing IT to reduce latency and increase data quality.
Attending Microsoft Management Summit (MMS) presents the opportunity to speak with system admins, DBAs, IT operations, upper management and more about the benefits that taking a unified approach to automation can deliver to datacenter, runbook or IT operational processes.
On the one side you have workload automation, which can trace its roots back to mainframe batch scheduling and has since moved to the forefront as a backbone for most of today’s mission-critical IT projects. It’s more important than ever in automating the underlying IT processes that directly support the business, e.g. passing data and managing dependencies between critical business applications, automating data warehousing/BI processes and more.
On the other side you have IT process automation…aka runbook automation and IT operational processes…that much of the System Center constituency here at MMS is predominantly concerned with. Typically IT operations will handle the automation of processes such as provisioning virtual resources, machine configuration, incident management, and change and release management with their own “point” automation tools. The problem is that when departmental boundaries cross and business processes and IT operational processes become co-dependent on one another, taking an “elemental” approach with these “point” automation tools creates barriers to unifying and automating across process types.
This is exactly why today we announced a new two-way integration between ActiveBatch and Microsoft’s System Center Orchestrator. As recent conversations with analysts and customers have confirmed, no automation vendor will be able to provide a one-stop automation solution for everything in IT. As EMA analyst Torsten Volk recently highlighted in a blog post, while it might be ideal for certain organizations to consolidate all automation disciplines into one centrally governed unit, it’s not always feasible. The onus is on the automation vendors to provide the functional flexibility to integrate if an IT organization so desires.
ActiveBatch’s strengths lie on the workload/business process automation side; Orchestrator’s on the IT operational side. By taking this architectural approach we’re unifying multiple automation solutions, eliminating the need to integrate process types with scripting, manual handoffs and flag waving, and increasing productivity by reducing the time spent jumping between disparate automation solutions.
For the IT professionals at this year’s Microsoft Management Summit, we’re clearly seeing this “architectural” approach extend across all elements of their IT organization, both Microsoft and non-Microsoft, process types, scripting languages, applications and more:
- Conversations underscored the need to extend runbook automation beyond the datacenter and IT Operations with workload automation. While many attendees stated that System Center is the dominant solution within the IT operations arena, they’re still facing the challenges associated with managing a heterogeneous IT environment, including the ability to automate Oracle EBS, Dynamics AX, SAP environments and various data warehousing and ETL solutions using a solution such as ActiveBatch.
- Many system administrators expressed interest in the ability to automate what we’d call the “administrative” functions associated with their jobs, such as managing SharePoint, Exchange and Active Directory tasks and processes. For example, the ability to create, modify and manage Active Directory objects automatically based on an IT or business event, such as HR uploading a new-hire form to a network drive, meant being able to more dynamically assign and enforce security policies while sparing themselves the manual labor of these chores.
- PowerShell is Microsoft’s task automation framework, but the ability to use workload automation as a framework to encapsulate PowerShell scripts resonated with attendees who wanted a central point of control and monitoring for their scripts, and who wanted to trigger those scripts based on an IT event, such as a database or network location being updated or an email or FTP file being received (a minimal sketch of this trigger pattern follows this list). And while PowerShell works well, reducing IT operations’ reliance on scripting and replacing it with production-ready job scheduling is a question of productivity and reliability.
- Windows Task Scheduler and SQL Server Agent remain stalwart automation tools within Microsoft environments, but nearly every attendee pointed to the static nature of date/time scheduling and the need to supersede it by automating business and IT processes in real time.
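To illustrate the event-trigger pattern attendees described, here is a minimal, stdlib-only sketch in Python. The watched share, script path and polling interval are all invented, and a workload automation product supplies this declaratively, along with credentials, logging and restart handling:

```python
"""Sketch of event-driven script triggering: fire a (hypothetical)
PowerShell script the moment a file lands in a watched folder, instead
of waiting for the next static date/time schedule."""
import subprocess
import time
from pathlib import Path

WATCH_DIR = Path(r"\\fileserver\dropzone")   # invented network share
SCRIPT = r"C:\scripts\process_upload.ps1"    # invented PowerShell script

seen = {p.name for p in WATCH_DIR.iterdir()}
while True:  # a real product replaces this loop with managed triggers
    for path in WATCH_DIR.iterdir():
        if path.name not in seen:
            seen.add(path.name)
            # Event-based execution: run the script as soon as the
            # new file appears, passing the file along as a parameter.
            subprocess.run(
                ["powershell.exe", "-File", SCRIPT, "-InputFile", str(path)]
            )
    time.sleep(5)
```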
We had the pleasure this week of speaking with a long-standing customer for an update on their implementation. Turkiye Finans has been an ActiveBatch user since 2009, and Mucahit Yavuz, IT Operations Manager at Turkiye Finans, has been part of the ActiveBatch project from the beginning.
The conversation underscored a number of points, but most importantly, Turkiye Finans’ implementation of ActiveBatch highlights a strategy we mentioned in our previous post: taking a phased approach in which singular processes were identified and automated first, and complementary processes and technologies were then added to the implementation, building an “architectural” IT automation solution.
For Turkiye Finans, it started with a common pain point: a collection of disparate “point” scheduling tools and scripts, in this case an assortment of PowerShell scripts executed via Windows Task Scheduler and SQL Server Agent, .NET Assemblies and Web Services. This loosely cobbled-together collection of tools was the foundation for the automation of many of the bank’s key business processes, including the updating of customer accounts, bill payments and transaction processing.
Defining and executing scripts via Task Scheduler and SQL Server Agent meant scheduling jobs across individual servers. If a script failed to run, finding the machine it lived on was half the battle, and once it was located, security credentials had to be entered each time. Using SQL Server Agent also meant jobs couldn’t be triggered by an IT event, such as a file being downloaded or a customer email being received.
The result was an IT operations team that spent more time putting out fires and managing existing processes via handshakes and flag waving than concentrating on automating and optimizing new processes for the benefit of the IT organization and the business as a whole.
Since consolidating multiple schedulers, the IT operations team has gained governance and visibility, including an automated alert/incident within System Center Operations Manager when a job fails, as opposed to the business calling the IT Helpdesk when a customer-facing application hasn’t been updated. They’ve also closed security loopholes, including removing the need to enter SQL Server security credentials each time by associating each user with his or her security credentials once, the first and only time they must be entered.
Less than a year later, the implementation has expanded to include the datacenter, managing a rapidly growing environment of virtual machines across VMware and Microsoft Hyper-V. The goal was to tie the execution of the business workflows already running within ActiveBatch to the provisioning of the underlying resources within the datacenter. Doing so has allowed the IT organization to balance the execution of business workflows against “the runbook and datacenter house-keeping tasks,” increasing the overall availability of virtual resources. Most importantly, using a single, unified solution has virtually eliminated failures of standalone jobs scheduled via SQL Server Agent or Task Scheduler because the server they reside on was down for maintenance.
There’s much more to this story, including more on Turkiye Finans’ use of System Center Operations Manager and the IT organization’s heavy reliance on Web Services and .NET Assemblies…too much to go into at length here. To read the complete story, feel free to download the case study.
Recent discussions with customers and industry analysts uncovered a common theme in the approach organizations are taking, or failing to take, when it comes to implementing an IT automation strategy.
Too often, IT organizations take what we’d call an “elemental” approach to IT automation. They identify a “point” problem and solve it with a “point” solution. For example, a DBA needs to automate database backups or a developer requires automation of repetitive and time-consuming FTP tasks. The result is that a “point” solution is implemented, e.g. writing a script, or using Windows Task Scheduler, Cron, SQL Server Agent, a tool for automating FTP transfers, etc. The same concept extends to process types as well, such as implementing an automation tool for runbook automation, a job scheduler to fulfill batch processing requirements or a solution for the DBA looking to automate datacenter tasks.
This line of thinking, while sound in the short term, creates IT operational issues for the organization as a whole in the long term. Taking this “elemental” approach means implementing a solution without considering the cross-departmental automation requirements at play. It builds silos of automation that present barriers to the integration of business and IT operational processes; processes that can be dependent on one another. Moreover, these “point” scheduling solutions are temporary fixes that become outdated or insufficient within a few years, increasing IT complexity and imposing additional costs and resources to maintain multiple tools going forward.
The idea is to take an “architectural” approach by adopting a unified, enterprise-wide IT automation strategy and solution. IT environments are growing in complexity while businesses become increasingly dependent on IT-based services for commercial success in today’s 24/7, Internet-driven world. Efficiently automating and managing the critical dependencies between systems and process types requires an automation solution that bridges those boundaries.
Taking an “architectural” approach does not require an “all-in” strategy whereby the IT organization consolidates multiple scheduling and automation solutions in one fell swoop. While some of our customers take this approach, the majority adopt a phased strategy whereby a single process or department is identified and automated, and complementary processes are then brought under the “architectural” umbrella.
A conversation I had with one of our longer-standing customers last month underscores this point. ActiveBatch was implemented within the IT department to directly support the business, including automating overnight processes that move data and manage dependencies between critical business applications, such as their ERP and CRM systems, and automating their ETL and business intelligence processes into end-to-end workflows. Six months later, ActiveBatch was expanded to include the datacenter, automating database backups, file and log shipments, file renaming and more to ensure continuity between the datacenter and the IT department. In the end, the company improved the productivity of both the IT and datacenter staff by consolidating multiple scheduling solutions, and streamlined procedures for its IT operations team by providing a central point of monitoring and alerting when a scheduled job failed.
As the example highlights, taking an “architectural” approach lays the foundation for a policy-driven automation strategy that drives governance, visibility and control, allowing IT to respond more quickly to the demands of the business when something does break, rather than hunting for a script running on some far-flung server.
And given the right technology and user interface, it can also enable more business-centric access to IT services, such as self-service access to management functions like monitoring workload progress, or self-service automation to allow end users to initiate processes themselves – all without the need to involve someone from IT operations.
The combined effects from adopting this enterprise-wide strategy can result in significant business improvements – such as agility and improved service levels – while reducing IT operational costs. These benefits not only enhance IT-business alignment, but can directly impact the ability of the business to expand and grow.
Last month Gartner announced the retirement of the Magic Quadrant for Workload Automation. The announcement has created significant buzz amongst various social media channels, such as LinkedIn’s Enterprise Job Scheduling & Workload Automation group. Here’s the opening summary courtesy of the Gartner announcement:
“Since workload automation is becoming part of a wider systematic approach to automation, Gartner is retiring the Magic Quadrant for workload automation. IT operations leaders must evaluate workload automation in the context of broad data center or application and process automation efforts.”
Gartner’s opening comments sum up the whole story. IT automation is undergoing a period of convergence and consolidation, of which workload automation is becoming one key component. IT organizations are managing increasingly complex processes that are codependent on one another and that span technologies and departments. Given the current sophistication of IT infrastructure, the time is right for vendors and consumers alike to look at their automation solutions end to end. Rather than treating automation projects as discrete initiatives…batch processing, runbook automation, application release automation, the datacenter…IT organizations have to step back and understand the big-picture impact that implementing silos of automation can have on the IT organization and the business as a whole.
We’re seeing this evolutionary shift taking place within the marketplace by IT organizations that have recognized the advantages of consolidating multiple automation tools into a unified solution. Doing so lays the foundation for a policy-driven automation strategy that drives governance, visibility and control, and as a result, improved service levels to the business.
Fluidity is the new mantra in workload automation. Management demands, data sources, SLA requirements, computing infrastructure—everything attached to IT is turning more flexible, more diverse and more nimble in order to satisfy the accelerating pace of business. At the center of it all, job scheduling and workload automation is becoming more important than at any time in the last 30 years. In the past twelve months we’ve witnessed the continued evolution of business IT into something faster, bigger, and more adaptable than we could have imagined even a few short years ago. Workload automation technology is responding with new solutions that will be increasingly visible in 2013 and beyond. These are the key trends we’ve identified that are re-shaping the segment:
A Look Back: Top Workload Automation Trends of 2012
Trend #1: SLA-Driven Workload Management
The maturation of BI has pushed IT to set business policies and meet SLAs for workloads and runbook automation. As a result, more advanced monitoring and alerting is being developed, as well as critical path analysis tied to business priority. This will be essential to ensure that resources are provisioned quickly for high-priority jobs and workflows.
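As a rough illustration of critical path analysis tied to business priority, here is a toy Python sketch over an invented job-dependency graph. It is not any vendor’s implementation, just the underlying idea: find which chain of jobs determines whether an SLA-bound deliverable lands on time.

```python
"""Toy critical-path calculation over a job dependency DAG.
Job names and durations are invented for illustration."""
from functools import lru_cache

# job -> (estimated minutes, upstream dependencies)
JOBS = {
    "extract":   (20, []),
    "transform": (45, ["extract"]),
    "load":      (30, ["transform"]),
    "cube":      (25, ["load"]),
    "report":    (10, ["cube"]),     # the SLA-bound deliverable
    "archive":   (15, ["extract"]),  # off the critical path
}

@lru_cache(maxsize=None)
def finish_time(job: str) -> int:
    """Earliest completion: own duration plus the slowest dependency chain."""
    duration, deps = JOBS[job]
    return duration + max((finish_time(d) for d in deps), default=0)

# The longest chain feeding "report" is its critical path: if the SLA
# is at risk, those upstream jobs are the ones to prioritize.
print({job: finish_time(job) for job in JOBS})
```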
Trend #2: Size, Sources, Complexities of Data are Exploding
Big Data no longer resides only within the enterprise. The democratization of BI is causing the death of the single data warehousing model; organizations are pulling data from multiple third-party sources to integrate with internal stores. Moreover, we’ve also seen a rise in self-serve data marts that enable BI users to choose the data sets needed for specific tasks. The need to automate the integration and movement of data between these disparate sources is creating what many are calling “agile BI,” or the ability to update integration processes quickly and efficiently.
Trend #3: Reducing IT Operation Costs Through Dynamic Provisioning
Cloud computing is transforming just about every facet of enterprise IT. But perhaps the biggest change is its ability to turn data processing infrastructure into a pay-as-you-go proposition, drawing upon resources of virtually limitless size exactly at the time they’re needed. By allowing workload automation to provision resources on demand, whether internal, virtual or cloud-based, IT organizations can ensure that the capacity curve tracks to the workload demand curve. This has the added benefit of effectively eliminating costs associated with idle assets between workload bursts.
A Look Ahead: Workload Automation Trends to Watch in 2013
Trend #1: Reactive Automation Model Shifts to Predictive
In response to the diverse array of systems and resources both inside and outside the enterprise, workload automation is becoming “intelligent.” That is, it combines predictive and reactive forms of resource management not only to schedule workflows, but also to proactively provision, schedule and distribute the necessary internal, virtual and cloud resources on the fly, based on historical analysis, to ensure SLAs are met and bottlenecks eliminated. In particular, new “what-if” capabilities are coming that will enable workflow forecasting across different platforms. IT managers will be able to predict when spikes may overload the servers or systems on which workloads execute, then distribute those workloads across servers before the spike hits. The improvement will help provide better alignment between IT organizations and the businesses they support.
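A toy sketch of the “what-if” idea, assuming an invented historical load profile and server names: predict the hour’s load from history and move work off a server before the spike, rather than after it.

```python
"""Predict an overload from a historical hourly profile and spill work
to a second server ahead of time. All numbers and names are invented."""
HISTORY = {  # average jobs observed per hour on server A
    8: 40, 9: 70, 10: 120, 11: 180, 12: 220, 13: 150,
}
CAPACITY = 160  # jobs/hour server A can absorb

def plan(hour: int, queued: list[str]) -> dict[str, str]:
    """Place queued jobs, redistributing if history predicts a spike."""
    predicted = HISTORY.get(hour, 0)
    placements = {}
    for i, job in enumerate(queued):
        # Redistribute ahead of the predicted spike rather than reacting
        # once server A is already saturated.
        spill = predicted > CAPACITY and i % 2 == 1
        placements[job] = "server-B" if spill else "server-A"
    return placements

print(plan(12, ["etl-1", "etl-2", "report-3", "backup-4"]))
```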
Trend #2: Democratization of BI Driving the Need for Real-Time Data
Because Business Intelligence is now something used by managers at all levels and in all areas of specialization, there is an increasing need for real-time, or near real-time, data updates to support on-demand decision-making. This is putting huge obligations on workload automation solutions to react quickly by reliably executing the complex workflows needed.
Trend #3: Self-Serve Automation
Managers outside the IT department—those with BI needs—will soon be able to choose from a workload automation service catalog to initiate processes or workflows themselves. Intelligent workload automation will move to support this trend, enabling the system to self-provision the necessary resources and then execute automatically.
Not so long ago, many people thought that job scheduling and workload automation solutions were becoming less relevant as mainframe systems gave way to distributed forms of processing. Nothing could be further from the truth. With IT becoming more complex, diverse and real-time than ever before, coordination and efficiency move to the forefront. Corporate Web sites, especially transactional eCommerce sites, are just one example of the many mission-critical functions governed by IT. With business processes, applications and computing infrastructures intertwined and interdependent inside and outside the enterprise, small glitches can spiral into major outages. Expect workload automation solutions to become a focal point of IT architecture in order to better manage this fluid new era of 24/7 processing.
Attending the Gartner ITExpo this week has presented the opportunity to demonstrate the value of workload automation to a different constituency group – C-level IT executives and upper management.
The conversation is fundamentally different from those we have with the system admins, application architects and developers at other tradeshows, such as Oracle OpenWorld, Informatica World and Microsoft Management Summit. It’s about understanding the business value that workload automation and enterprise job scheduling bring to an IT organization.
That value represents the continuity, benefits and savings that a single, enterprise-wide automation solution brings. IT environments have become increasingly distributed and complex, and scheduling jobs at the machine level has become impractical and error-prone. Application- and platform-specific scheduling solutions…Windows Task Scheduler, Cron, scheduling tools for virtual and cloud-based platforms or for applications like Informatica, SAP, Oracle and more…place an operational burden and expense on an IT organization that limits its ability to easily and quickly automate processes spanning today’s heterogeneous IT environments.
These “closed” scheduling solutions present a barrier to integrating business and IT operational workflows to streamline IT operations and increase flexibility and support for the business. Moreover, the inability to automate the management of resources and systems, both on-premise and cloud-based, to ensure that workflows execute successfully and resources are “intelligently” provisioned and de-provisioned adds further cost. Rather than provisioning server resources for peak demand – and paying the price for underutilization during low workload periods – virtual and cloud computing, combined with workload automation, makes it possible to match workload execution with resource capacity in a “pay as you consume” model. That translates into hard dollars for an IT organization.
To overcome these boundaries, an enterprise IT automation solution can drive efficiency and reduce the cost of operations by providing IT organizations with a single scheduling and automation platform that spans both business and IT operational processes. Workload automation brings all of these requirements…job scheduling, workload and runbook automation, self-service automation and others…into a unified framework, reducing the cost of IT operations, improving IT service levels and increasing business agility and flexibility.
Oracle OpenWorld 2012 was Advanced Systems Concepts’ first time exhibiting at this long-standing industry event and the first time many Oracle attendees had the opportunity to view ActiveBatch up close. We spoke with a diverse range of IT professionals, from DBAs to application architects to developers. But if there was a common theme, it was that everybody saw the value in a more robust automation solution, whether they were looking to automate strictly Oracle processes or to integrate and automate those same processes across other, non-Oracle applications and technologies.
- Lots of interesting conversations with application architects and business developers regarding Oracle E-Business Suite (EBS) and PeopleSoft. Many of these attendees are using Concurrent Manager and Process Scheduler to schedule and execute processes within these respective applications, but instantly see the value in having an enterprise workload automation platform that adds more advanced date/time scheduling, and more importantly, an event automation framework to allow them to trigger processes based on common IT events, such as an email, an FTP, an Oracle database trigger, or others.
- Like most enterprise applications’ native schedulers, those of Oracle EBS and PeopleSoft are limited to scheduling only EBS or PeopleSoft processes. Oracle attendees looking to pass data and manage dependencies between Oracle applications and other technologies liked the idea of a “single point of control” through which to build and manage these workflows. Other applications attendees commonly mentioned included JD Edwards, Hyperion, Informatica and SAP.
- The ability to extend the “reach” of a workload automation solution by supporting technologies such as WCF LOB adapters, Web Services and Oracle Stored Procedures was a popular topic with attendees. Today, IT operational and business processes span everything from legacy applications to newer, standards-based systems built on SOA or Web Services. The ability to integrate these application processes within end-to-end workflows was a common talking point, and attendees liked ActiveBatch’s ability to “consume” a Web Service, LOB Adapter or Stored Procedure and manage its execution within ActiveBatch workflows, all without the need for custom scripting.
- For example, a number of attendees liked the fact that ActiveBatch could call an Oracle Stored Procedure and map those functions as reusable job steps within the Integrated Jobs Library, all by simply specifying a database connection (see the hedged sketch after this list). Moreover, the ability to use output data passed back from that called method as an execution variable for downstream job steps was equally well received.
- Lastly, just because we were at an Oracle show didn’t mean we didn’t hear plenty about Microsoft, particularly around IT administrative processes. The ability for system administrators to automate many of the administrative tasks that consume their time, such as creating SharePoint Users and Groups or administering Active Directory Objects, Users and Groups, resonated with many.
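For readers curious what that stored-procedure pattern looks like outside a job scheduler’s UI, here is a sketch using the cx_Oracle driver. The DSN, credentials and procedure name are placeholders, and ActiveBatch’s actual job-step mechanics may differ:

```python
"""Call an Oracle stored procedure, capture its output, and branch a
downstream step on the result -- the pattern described above, done here
directly with cx_Oracle rather than reusable job steps."""
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb1")  # placeholder DSN
cur = conn.cursor()

row_count = cur.var(int)  # variable to capture the OUT parameter
cur.callproc("refresh_sales_mart", [row_count])  # invented procedure name

# In a workload automation tool this value would flow to the next job
# step as an execution variable; here we just branch on it directly.
if row_count.getvalue() == 0:
    print("no rows refreshed; skipping downstream report job")
else:
    print(f"refreshed {row_count.getvalue()} rows; triggering report job")

cur.close()
conn.close()
```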
Cloud computing is transforming just about every facet of enterprise IT. Yet one of the stalwarts of IT—workload automation—may actually hold the key to making the most of this revolutionary innovation.
While the cloud offers the one-two combination of limitless resources and pay-as-you-go pricing, its cost-efficiencies are still governed by an IT organization’s internal guesswork. Plan for too many resources and the economic value can be diminished or lost. Anticipate too few, and performance (increasingly defined by SLAs) will suffer—perhaps precipitously.
Moreover, resource requirements are a moving target. What might be needed this month, this day, this hour or even this minute may vary depending on unexpected changes in processing needs. Such external resource decisions must be made based on the amount of internal resources that happen to be available at that particular time.
It would seem that workload automation applications would be an ideal solution to this important problem. After all, these platforms have evolved in recent years and now are designed to analyze, assemble and monitor the exact amount of limited computing resources needed to simultaneously execute tens of thousands of computing jobs within the enterprise.
The problem is, most conventional workload automation solutions rely on a reactive model for decision-making. Their specialty is managing a finite number of computing resources to meet the needs of discrete, individual tasks occurring on a schedule or under well-defined dependencies. To effectively leverage a system in which storage, processing power and other resources are without limit, a decision engine must be in place that can accurately predict the capacity needed.
To accomplish this, workload automation must have not only the historical information necessary to plan for sufficient cloud resources, but also the analytical power to decide exactly how many resources will be needed, and when, on a just-in-time basis. This is where the advent of a powerful new concept—intelligent automation—will determine the ultimate value of computing in the virtual/cloud age.
Intelligent automation, which combines predictive and reactive forms of resource management, provisioning and scheduling, can transform the use of cloud assets. By employing two internal databases—one transactional, the other analytical—intelligent automation can predict the computing capacity needed at a given moment.
Think of it this way. A fast-food manager, operating under a reactive model, might wait until a noontime lunch crowd buys up all of his hamburgers before grilling more—say at 12:10 pm. Predictive management, by contrast, based on a knowledge of past lunch hour demands, would require that the grill start cooking more burgers at 11:45 am, anticipating the rush and then adjusting during the rush as needed.
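Translating the burger analogy into a few lines of Python (all numbers invented): the reactive policy adds capacity only after demand has outrun it, while the predictive policy provisions ahead of a peak learned from history.

```python
"""The burger analogy in code. Demand figures, the learned peak hour and
the provisioning lead time are all invented for illustration."""
HISTORICAL_PEAK_HOUR = 12  # the 'lunch rush' learned from history
PROVISION_LEAD = 1         # hours needed to spin up extra capacity

def reactive(hour: int, demand: int, capacity: int) -> int:
    # Add capacity only after demand exceeds it -- the 12:10 pm grill.
    return capacity + max(0, demand - capacity)

def predictive(hour: int, demand: int, capacity: int) -> int:
    # Start 'cooking' at 11:45: double capacity before the known peak.
    if hour >= HISTORICAL_PEAK_HOUR - PROVISION_LEAD:
        capacity *= 2
    return capacity

for hour, demand in [(10, 50), (11, 80), (12, 200)]:
    print(hour, reactive(hour, demand, 100), predictive(hour, demand, 100))
```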
Intelligent workload automation platforms with the ability to provision internal, distributed and cloud resources on the fly and in real time are just now entering the market. With SLAs becoming the focus of many enterprises’ IT management strategies, it’s clear that efficient resource planning and utilization are core issues in the cloud computing era. Intelligent workload automation, effectively integrated into the enterprise, can address and resolve these thorny challenges.
The Royal Bank of Scotland (RBS) IT failure was one of the most highly publicized IT failures this year, underscoring the increasing dependence that financial services institutions, among other industries, place on their IT organizations. Some, including the bank’s CEO, have blamed the failure on a lack of resources dedicated to maintaining and updating the legacy systems and applications on which so many of the bank’s critical processes depend.
What ultimately triggered the problem may end up being largely irrelevant, as no system is 100% foolproof. But what the RBS IT failure highlights is that with IT dependence comes risk. To mitigate that risk, IT organizations must learn to more quickly incorporate current and rapidly developing business challenges into the IT infrastructure.
What do I mean by “rapidly developing business challenges”? The consumerization of technology means consumers increasingly expect to access a business’s services online, adding complexity to IT environments and more pain to the IT management headache. This is difficult not just because of the increasing complexity of IT environments, but also because of the difficulty of updating, maintaining and managing the complex dependencies between these various systems.
For example, a bank teller making a one-off mistake while interacting with a single customer is one thing; incorrectly managing the complex interdependencies between systems, which can set off a domino effect affecting thousands of customers, is another.
Adding to this is the speed at which IT systems are expected to operate. Consumers now operate in a near real-time environment, and this creates pressure that IT departments are struggling to keep pace with.
So if the RBS IT failure shows anything, it is that what’s needed is more effective management of systems, management that combines intelligent software platforms with intelligent people. In the end, the huge volume of updates and the complex interdependencies across heterogeneous environments are making IT automation a necessity.
But not just any automation: what I’d call “intelligent” IT automation. Workload and IT process automation cannot be effective unless the solution is able to recognize that an error has occurred, where it has occurred and what the impact will be. Compliance and control, alerting and error handling are all critical components in allowing IT to effectively govern and audit the automation of processes to drive speed and efficiency, all while preventing errors like the one RBS experienced.