Apr 1 - ATM networks have changed very little since their inception. IBM’s withdrawal of support for OS/2 - the long-reigning preferred operating system for ATMs - at the turn of the 21st century instigated a period of sustained change. According to Retail Banking Research, OS/2 usage is decreasing by around 8 to 10 percent a year and will be phased out over the next four to five years. Although global adoption rates vary according to analyst research - today, Retail Banking Research estimates 64 percent in Western Europe and Dove Consulting projects 63 percent by 2008 in the U.S. - Windows has become the ATM operating system of choice.
Industry leaders recognize that Windows heralds the move away from proprietary to open standards. As hardware and software become decoupled, deployers are no longer locked into the proprietary software that accompanies an ATM terminal. This has changed the business models of ATM deployers as they have adopted a multi-vendor strategy to benefit from the more competitive pricing structure created by the Windows environment. However, a number of areas have been overlooked when considering this migration, notably how ATM terminals are remotely monitored and controlled in the new environment. The outcome is that banks and payments processors are experiencing greater downtime at their ATM networks in the Windows world.
The move from OS/2’s stable ATM environment to the complexity of Windows with its regular security updates to the operating system, coupled with mismatched release cycles on the numerous other applications now resident on the ATM, impacts the ATM channel significantly. With new levels of risk introduced into the networks, ATM deployers need to monitor and control all elements more closely to ensure a profitable and reliable ATM network in the 21st century. This guide has been designed to help ATM network deployers understand the business value of advanced ATM monitoring in a Windows environment.
Key pressures affecting the maintenance and control of ATM networks
With more than 1.5 million terminals in service worldwide, financial institutions recognize that the ATM is a key customer touchpoint and have attached increased value to it beyond simple cash dispensing. Intensified economic and business forces have placed extraordinary pressure on banks to remain competitive, increase customer loyalty, and improve the efficiency and profitability of their ATM networks. These pressures are compounded by the fact that, given ageing legacy systems and rapid consolidation in recent years, banks now manage an increasingly complex mixture of vertically siloed technologies.
To get the most from their ATM channel, business owners must look to maximize the investment they have made in hardware and infrastructure as part of their migration to a Windows/open standards-based network and beyond. This has to be done alongside the ongoing challenge of addressing key pressures that affect the maintenance and control of ATM networks as well as minimizing the ongoing cost of ownership of the network. Outlined below are some of the key pressures that business owners face that affect the maintenance and control of their ATM networks.
The time factor
Banks are under more pressure than ever to prove the profitability of the ATM network by extending their core ATM capabilities but are still struggling with other factors such as reducing the cost of ATM network maintenance and maximizing availability. While ideally any fault at an ATM terminal should be raised as soon as it occurs in order to be promptly fixed, the reality is that there is often a significant time-lag from when a fault occurs to when the network operator is made aware of it or identifies it - in some cases, hours or even days.
The current approach relies largely on listening to messages at the host and trying to interpret, or second-guess, what is happening at an ATM. For example, it may be several hours before a fraud detection platform notices that an ATM terminal has performed no transactions over a normally active period and raises an alert for investigation. Meanwhile, the terminal has suffered a software fault unbeknown to the host, and customers have been unable to perform any transactions.
The data available to network deployers about the state of ATM machines lacks granularity so they do not know the specific nature of a fault at the ATM. This leads to longer delays in routing the problem to the correct department for investigation and resolution. The usual response is that ATM deployers send engineers to attend out-of-service ATMs without knowing the nature of the problem, which could be solved by a simple software restart. In the meantime, deployers suffer from the associated cost of network downtime, not only in terms of lost interchange revenues but also in failed customer interactions.
ATM downtime creates brand risk both with customers and within the highly competitive banking industry as a whole. Customers have come to expect ATMs to be available 24/7 and to provide a high quality, stable service. Indeed, widespread ATM downtime has attracted national newspaper coverage, with the associated damage to the bank’s reputation. Low network availability and bad service can also be an embarrassing prospect for financial institutions when banking industry peers are presented with, for example, a monthly account of their country’s ATM network statistics.
As such, quality is particularly important for deployers when introducing new functionality at the ATM. However, it is difficult to evaluate how successful the uptake has been or what errors have occurred after deployment without closer monitoring of the network. Accurate monitoring post-release helps to gauge the success of each new software release. As banks embrace the value of customer loyalty and retention as opposed to the historic focus on winning new customers, it is essential that there is a consistently high quality of interaction across all banking channels.
A survey conducted by ICM research on behalf of Level Four in July 2007 indicated that 38 percent of respondents (UK cardholders) would consider moving their main bank account if their bank’s ATMs were constantly out of service or unable to dispense cash.
An area of ATM operations that often causes overhead in time and cost is journaling – the process that gives the deployer a record of the exact sequence of events occurring at the ATM. However, for many, journaling is still largely paper-based, which involves a physical check on the journal roll inside the ATM to resolve any discrepancies. This approach results in lengthy delays in complaint handling that customers will not tolerate. Due to the high overhead of investigation, in some instances claims are not contested and revenue is lost. What is required is a centrally held journal database derived from device level logs, accessible from a customer contact centre to reduce operational cost and drive out inefficiency.
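As an illustration only, such a central journal database could be sketched along the following lines. The schema, event names and terminal IDs here are hypothetical, not drawn from any particular deployer's system; the point is simply that device-level logs ingested into a queryable store let a contact centre resolve a disputed transaction in seconds rather than by retrieving a paper roll.

```python
import sqlite3

def create_journal_db(path=":memory:"):
    """Create the central journal store (in-memory here for illustration)."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS journal (
        terminal_id TEXT,   -- which ATM produced the event
        event_time  TEXT,   -- ISO-8601 timestamp from the device log
        event_type  TEXT,   -- e.g. CARD_INSERTED, CASH_DISPENSED (hypothetical names)
        detail      TEXT)""")
    return conn

def ingest(conn, terminal_id, entries):
    """Load a batch of device-level log entries uploaded from one terminal."""
    conn.executemany(
        "INSERT INTO journal VALUES (?, ?, ?, ?)",
        [(terminal_id, e["time"], e["type"], e["detail"]) for e in entries])
    conn.commit()

def events_for(conn, terminal_id):
    """Contact-centre query: full event sequence for one terminal, in order."""
    cur = conn.execute(
        "SELECT event_time, event_type, detail FROM journal "
        "WHERE terminal_id = ? ORDER BY event_time", (terminal_id,))
    return cur.fetchall()
```

A dispute over a non-dispense claim then becomes a single `events_for` lookup from the contact centre rather than a site visit.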
Dynamic inventory management
Another issue faced by ATM deployers is trying to understand the exact status of the different hardware devices and software versions across their ATM network, especially in view of the number of third parties constantly replacing parts through servicing ATMs, and also different service arrangements they may be managing in a multi-vendor environment. Deployers require a dynamic inventory database of all peripherals and software versions at individual ATM terminals that they can query at any time and receive real-time responses.
The information would then be kept in a central repository where the deployer would have a dynamic and accurate picture of its ATM estate. This approach would enable it to route service calls with certainty (for example, to ensure the correct part to fix an ‘out of order’ ATM is sent the first time) and conduct software updates only on machines requiring updating.
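A minimal sketch of such a repository is shown below. It assumes a simple agent-side report that overwrites the central record for a terminal; the field names and package identifiers are illustrative, not a real product's schema. The useful query is the one the text describes: which terminals actually require a given software update.

```python
from dataclasses import dataclass, field

@dataclass
class Terminal:
    """Central record of one ATM's hardware and software state."""
    terminal_id: str
    peripherals: dict = field(default_factory=dict)  # device name -> model/firmware
    software: dict = field(default_factory=dict)     # package name -> version

class InventoryRepository:
    """Central, queryable repository of per-terminal inventory reports."""

    def __init__(self):
        self.terminals = {}

    def report(self, terminal_id, peripherals, software):
        """Agent-side report: replace the stored record for this terminal."""
        self.terminals[terminal_id] = Terminal(
            terminal_id, dict(peripherals), dict(software))

    def needing_update(self, package, target_version):
        """Terminals whose installed version of `package` is not the target."""
        return [t.terminal_id for t in self.terminals.values()
                if t.software.get(package) != target_version]
```

With this in place, a software rollout touches only the terminals returned by `needing_update`, and a service call for a faulty dispenser can carry the exact part model recorded for that terminal.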
Consider also the example of a bank looking to deploy ATM advertising, or other enhanced content on their terminals. This type of graphical/animated content requires the correct formatting in order to display correctly and so the deployer needs to have accurate data about each terminal’s specific capabilities, e.g. screen resolution and graphics cards in order to display properly. Many banks have lost track of the capabilities of the ATMs they own, making a project such as advertising extremely daunting without an agent-based monitoring approach.
Remote diagnosis of software faults
Software problems are the biggest cause of ATM failures. To prevent long periods of network downtime and the associated loss of revenue, ATM deployers need the ability to perform ‘keyhole surgery’ - stopping and restarting Windows processes at individual ATM terminals remotely.
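The ‘keyhole surgery’ idea can be sketched as an agent resident on the terminal that accepts a narrow set of commands from the monitoring server. This is a simplified illustration, not any vendor's actual agent: the OS-specific stop/start calls (for example, Windows service control) are abstracted behind an injected process manager so the dispatch logic is clear.

```python
class RemoteAgent:
    """Terminal-side agent executing a narrowly scoped command set sent
    from the central monitoring server (illustrative sketch)."""

    def __init__(self, process_manager):
        # process_manager wraps the OS-specific stop/start/query calls;
        # injected here so the agent logic stays platform-neutral.
        self.pm = process_manager
        self.handlers = {
            "restart": self._restart,
            "status": self._status,
        }

    def execute(self, command, process_name):
        """Dispatch a remote command; unknown commands are refused."""
        handler = self.handlers.get(command)
        if handler is None:
            return {"ok": False, "error": f"unknown command {command!r}"}
        return handler(process_name)

    def _restart(self, name):
        self.pm.stop(name)
        self.pm.start(name)
        return {"ok": True, "action": f"restarted {name}"}

    def _status(self, name):
        return {"ok": True, "running": self.pm.is_running(name)}
```

Restricting the agent to an explicit command table is a deliberate choice: it keeps the remote capability useful for first-line fixes (a software restart) without exposing general remote execution on the terminal.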
Measurement and reporting of ATM uptime
In the OS/2 world, ATM availability was measured at the host and reported from the IT area within a bank to the business area. This was a valid approach for older communication protocols (such as SNA) that maintained a real point-to-point connection between the host and the ATM. Because the majority of host systems were based on fault-tolerant computing and well-established software, and the OS/2 operating system and application software were deemed very reliable, reported ATM network availability was high (often in excess of 99 percent): it was taken as a given that if the host was available for processing transactions, the ATM was available and in service. Today, however, most banks use TCP/IP as their communications protocol, which does not use a dedicated host connection, and so ATM availability is decoupled from network availability.
However, in a Windows network this approach is no longer valid, because true ATM service availability can only be measured at the terminal. This requires constant reporting from the ATM terminal itself: even if the host is available, no transactions can be processed if the terminal itself is out of service. For a true and fair representation of ATM network availability, periodic ‘heartbeat’ reporting to a central server is required, confirming that the ATM terminal is online, operational and able to accept customer transactions.
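The heartbeat approach described above can be sketched as follows. The interval and missed-beat threshold are assumed values for illustration, not figures from the source; the server simply records the last heartbeat per terminal and flags any terminal whose heartbeat has gone stale.

```python
import time

HEARTBEAT_INTERVAL = 60   # seconds between heartbeats (assumed value)
MISSED_BEATS = 3          # missed heartbeats before a terminal is flagged offline

class HeartbeatMonitor:
    """Server-side view of terminal liveness, keyed by terminal ID."""

    def __init__(self, now=time.time):
        self.last_seen = {}   # terminal_id -> timestamp of last heartbeat
        self.now = now        # injectable clock, so behaviour can be simulated

    def record(self, terminal_id):
        """Called whenever a heartbeat message arrives from a terminal."""
        self.last_seen[terminal_id] = self.now()

    def status(self, terminal_id):
        """'online' if a heartbeat arrived recently enough, else 'offline'."""
        seen = self.last_seen.get(terminal_id)
        if seen is None:
            return "offline"
        age = self.now() - seen
        return "online" if age < HEARTBEAT_INTERVAL * MISSED_BEATS else "offline"
```

Note that availability measured this way reflects the terminal, not the host: a terminal that stops sending heartbeats is flagged even while the host itself processes transactions normally.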
Greater visibility and transparency of ATM network information for both IT and business departments will help ensure, and justify, closer alignment between them. It will also improve the quality and accessibility of true network statistics and information for management.
Security and risk management
Security is an issue high on the list of financial institutions’ pressures. ATM networks are still subject to fraudulent attacks, even more so since the introduction of chip-enabled cards in much of the world that has forced fraud to migrate away from the Point of Sale. While there are a number of approaches to counter ATM fraud, closer monitoring of customer interactions at the ATM through video or photographic snapshots would provide ATM owners with a greater level of security.
Within the networks themselves, security is an issue. There is an emerging trend to grant third-party service agents access to a limited amount of diagnostic information about the network directly. While this offers obvious benefits, it also introduces a level of risk around network security that needs to be considered.
Loss of interchange revenue
Network downtime is a worst-case scenario for deployers, especially for processors who manage multiple ATM networks, as it is costly in terms of lost revenue and customer loyalty. If a deployer experiences significant downtime in their ATM network, they lose interchange revenue for several reasons. They cannot attract interchange revenue by offering ATM services to existing and prospective customers. Additionally, existing customers will seek alternative ATMs - most likely a competitor’s - and the deployer will be charged more expensive interchange fees. This compounds the cost of network downtime, on top of its negative impact on brand and reputation with customers.
Exploiting the potential for multi-vendor networks
Finally, with M&A activity and consolidation within the banking industry combined with the proliferation of open standards brought on by the move to Windows, ATM deployers face the challenge of capitalizing on the potential of multi-vendor networks. Having hardware from multiple vendors within a network introduces a new level of risk. It also impacts deployers’ operations, as they have to reassess how to manage their ATM network, especially in terms of terminal monitoring and ATM servicing. To reap the benefits of open standards and increased competition between vendors for the hardware business, ATM deployers need to drive down these costs. Banks need to consider their choice of ATM software, monitoring software and third-party service provision carefully, viewing each as a separate business case, if they are to benefit most from this opportunity and avoid any vendor lock-in strategies re-emerging.
In conclusion, the combination of external and internal pressures requires banks to gain greater insight into, and better understanding of, what the ATM network is doing for their business. The intelligence gained provides a competitive advantage for banks, improving their overall business by providing visibility into never-before-seen trends, patterns and statistics. With more detailed knowledge of their ATM network’s performance, network traffic and quality of service, banks can improve the profitability of their networks and better address customer requirements and needs.
Level Four has written a Guide designed to help banks and processors understand the business value of advanced ATM monitoring in a Windows environment. The Guide outlines the business and technical issues to consider when wanting to extract greater intelligence from an ATM network and put more intelligence back into the network. To download a full copy, please visit www.levelfour.com.