Data Quality vs Data Quantity: What’s More Important for AI?


Businesses are enjoying a data revolution

The sheer volume of data being collected from Internet of Things devices has skyrocketed in recent years, and sophisticated artificial intelligence and machine learning tools are able to extract valuable information that would otherwise go unnoticed. However, this glut of data has raised a critical question: How important is the quality of the data being collected?

Artificial intelligence and machine learning can provide remarkable insight. However, AI can’t distinguish between good data and bad data on its own, and the algorithms powering AI can only assume the data being analyzed is reliable. Bad data, at best, will produce results that aren’t actionable or insightful. But there’s an even bigger concern: Bad data can lead to results that are misleading. In addition to the time and money wasted analyzing bad data, AI systems can encourage a company to take steps that are even more wasteful.

Martha Bennett at Forrester believes deriving meaningful insights from data is key to staying competitive.

One concern that often arises in statistics is erroneous signals. A small bias in a sensor, for example, can cause AI systems to see an effect that isn’t real. The likelihood of a system picking up on an errant signal rises with the volume of data collected; a tiny bias in a sample is far more likely to be noticed by AI when using the volume of data common with today’s machine learning systems. Even data of reasonably high quality can lead to erroneous results, potentially leading companies down an unproductive path. This is part of the reason why data scientists are in such high demand. Their ability to implement the right algorithms is clearly important, but it also takes human judgment to make sense of the results AI systems produce. Determining whether a signal is a real effect can be a challenging task.
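To make this concrete, here is a minimal sketch (hypothetical numbers, standard one-sample z-statistic) of how the same tiny sensor bias that is statistically invisible in a small sample becomes a loud "signal" at big-data scale:

```python
import math

def z_score(bias: float, sigma: float, n: int) -> float:
    """z-statistic for testing whether a sample mean differs from zero,
    given a true underlying bias, noise level sigma, and sample size n."""
    return bias / (sigma / math.sqrt(n))

# A 0.01-unit sensor bias buried in noise with sigma = 1.0
small_n = z_score(0.01, 1.0, 100)        # ~0.1  -> invisible
large_n = z_score(0.01, 1.0, 1_000_000)  # ~10.0 -> looks "highly significant"

print(round(small_n, 2), round(large_n, 2))
```

The effect is purely a function of sample size: the bias never changed, only the volume of data did, which is why statistical significance alone can't certify that a detected effect is real.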

The power of machine learning is largely due to its ability to learn on its own. In order to get started, however, ML systems need to be trained with a set of data, and this data set needs to be of especially high quality, as even small problems can spoil the algorithms from the beginning. ML often works best when it’s left alone; tweaking the results manually can introduce bias and other problems. However, it’s important to carefully note how the ML system was trained and what data set was used. If problems arise later on, being able to examine the original data can be essential.
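As a toy illustration of how training-data problems propagate (the data set and nearest-centroid classifier below are purely hypothetical), mislabeling just a few training examples shifts the learned decision boundary and costs accuracy on clean test data:

```python
def centroid(xs):
    return sum(xs) / len(xs)

def train(points, labels):
    """Nearest-centroid classifier: learn one centroid per class."""
    c0 = centroid([x for x, y in zip(points, labels) if y == 0])
    c1 = centroid([x for x, y in zip(points, labels) if y == 1])
    return c0, c1

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

def accuracy(model, points, labels):
    hits = sum(predict(model, x) == y for x, y in zip(points, labels))
    return hits / len(points)

train_x = [0.0, 0.2, 0.4, 0.6, 0.8, 2.0, 2.2, 2.4, 2.6, 2.8]
clean_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
noisy_y = [0, 0, 0, 0, 0, 1, 1, 0, 0, 0]  # three class-1 examples mislabeled

test_x = [0.3, 0.7, 1.3, 1.5, 2.1, 2.5]
test_y = [0, 0, 0, 1, 1, 1]

clean_acc = accuracy(train(train_x, clean_y), test_x, test_y)
noisy_acc = accuracy(train(train_x, noisy_y), test_x, test_y)
print(clean_acc, noisy_acc)  # 1.0 vs ~0.83
```

The mislabeled examples drag the class-0 centroid toward class 1, moving the boundary and misclassifying borderline test points; with a real ML system the same corruption is simply harder to see.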

Relying on AI is important for a growing number of businesses. However, it can be tempting to use AI when it’s not the appropriate choice. In some situations, there simply isn’t enough high-quality data for systems to analyze, yet people often feel tempted to use AI systems anyway. Before launching an AI project, it’s important to examine the data itself and determine if quality results are even possible. AI systems all have their limitations, and none are able to make up for a lack of good data to analyze. Again, human expertise is essential. Data scientists and other statistics experts know how to examine data and find out what type of analysis is appropriate.

In general, more data leads to better results. Eventually, however, there comes a point where no additional data is needed, as the data set is already broad enough to get the most out of AI and ML systems. Because data storage and processing power are inexpensive, it can be easy to miss the point at which gathering additional data stops paying off. Over time, however, costs can creep up and eventually become less sustainable. The problem is exacerbated by cloud storage, which puts additional storage space only a few clicks away. Before feeding more data into AI and ML systems, organizations should take time to determine all of the associated costs and ask whether doing so is worthwhile. If AI and ML systems are already fully saturated with data, it may make more sense to cut back instead of expand.

Data is driving today’s tech fields, and there’s no sign of this trend slowing down in the near future. However, it’s important to use the right tools when analyzing data to make the most of it, as misusing data can be wasteful or even dangerous. Before feeding more and more data to AI and ML systems, take some time to determine if there are ways to improve overall quality. A bit of data quality improvement can go a long way toward making the most of AI and ML systems.

Study Suggests Companies Reaping Benefits of Combining AR and IIoT


A new study offers fascinating insights into how companies are crafting industrial IoT and augmented reality experiences

Published by PTC, the “State of Industrial Innovation” study represents an ongoing analysis from the ThingWorx maker that explores market and adoption trends in the industrial internet of things and mixed reality – two fields becoming more robust and complex as they evolve and intertwine.

AR adoption is increasing at a rapid pace. With enterprises in the midst of digital transformation, those looking to keep up cannot afford to delay adopting AR technology. Enterprises need to determine the business case across a wide range of potential AR applications: Customers are expecting better experiences when dealing with enterprises, and AR can play a crucial role in providing innovative services, solutions, and products. AR used internally is crucial as well, as it allows employees to be more productive and provide better interactions with customers.


PTC’s report aims to focus on strategic differentiation; that is, how effectively companies are using AR to differentiate themselves from their competitors. The report shows that companies are reaping the benefits of AR technologies, often experiencing a return on their investment within a year. This high pace of adoption, PTC notes, presents tremendous opportunity but also the potential for disruption across entire industries.

Remote Monitoring and Maintenance

IoT’s potential use cases are broad, but for industrial IoT, AR’s most beneficial use case may be remote monitoring and maintenance. We recently spoke to PTC’s Chirag Mehta to hear how BAE Systems are using augmented reality to help train new staff.

Usage differs between small and large companies, with smaller companies aiming to increase their market share by offering unique products and services to stand out from their competitors. Large companies, by contrast, typically have substantial internal service capabilities, and their industrial IoT focus lies more on increasing productivity and efficiency. Mike Campbell, EVP for AR at PTC, believes “the window to leverage AR to differentiate is limited”.

In compiling this report, we observed that pilots start with internal proofs of concept and quickly become deployed across multiple areas, including customer-facing product and service initiatives. Enterprises and consumers alike are on the verge of truly experiencing the transformative power of augmented reality.

The power of remote monitoring is clear to enterprises, and PTC found that they’re often implementing IoT technology in parallel with other technologies. Remote monitoring can serve as a foundation for innovative ideas, providing a layer upon which many enterprises are building machine learning and predictive analytics. One of the most powerful features of this new technology is being able to predict failures ahead of time, helping customers best use their products. Manufacturers can also use this technology to find service opportunities that would go unnoticed internally or by third parties.


Varian stands as an excellent example of how industrial IoT can transform companies. The manufacturer of radiation oncology equipment was able to reduce the cost of their service by using IoT technology to improve the uptime of their machines. Predictive maintenance, in particular, has proven to be beneficial; Varian was able to reduce the frequency of service trips to their machines by 42 percent, saving a considerable amount of labor costs and reducing inconvenience to customers.

Mixed Reality and the Future of Healthcare


The healthcare field is constantly being changed by new drugs, new studies, and new therapies

However, the field often lags when it comes to adopting new technology, and even making the seemingly straightforward move to electronic records has proven to be a lengthy process. Still, new technology not created exclusively for medicine is coming, and mixed reality devices in particular are becoming a reality for many medical professionals and healthcare centers.

Mixed reality combines virtual reality elements with human vision. Head-mounted devices use clear screens to give users an unobstructed view, but various technologies can be used to project images onto the screen. For doctors, MR provides a means of viewing images and data far more convenient than charts or screens. Furthermore, MR can provide new ways of interacting with patients by projecting information onto medical charts or even directly on the patient. As medical schools and other organizations continue to explore MR, experts will devise novel uses for MR technology.

Among all the MR devices coming to market, the one that’s garnered the most attention is Microsoft’s HoloLens. The head-mounted device is bulkier than Google Glass, but it offers far greater capabilities by using holograms to create realistic images. HoloLens is also more aware of what the user is seeing, and this greater flexibility enables a host of use cases traditional AR technology can’t match. HoloLens headsets aren’t cheap, currently carrying a price tag above $3,000, but their cost is relatively low compared to many common medical devices, and cheaper headsets will no doubt come to market in the coming years.

Having an intimate knowledge of human anatomy is crucial for medical students. While charts and interactive computer programs can be valuable tools, medical students often work with cadavers. With MR, students can receive a similarly detailed experience at any time. Furthermore, MR technology can let students zoom in on particular segments, providing a way to explore that’s impractical with a cadaver. Already, medical schools are looking to turn to MR as a primary means of educating future doctors.

Instant Access to Information

Hospitals and medical clinics are becoming more connected, and sensors are ubiquitous in modern medicine. However, doctors still often rely on older technologies when interacting with patients, and many end up reading paper charts to get an overview of a patient’s condition. MR headsets can detect patients and instantly provide relevant medical information to doctors, saving time during interactions and allowing doctors to respond more quickly to emergencies. Simply being able to see a patient’s vital signs without having to read screens or pull out paperwork can save valuable time and allow for more convenient patient interactions.

We recently spoke to Sirko Pelzl, the CEO at apoQlar, creators of an MRI rendering app for HoloLens.

For budding surgeons, being able to view a surgery is essential, as no amount of reading or studying can replace seeing surgeons in action. However, finding time to observe a surgery in progress can be difficult, as space is limited. With MR technology, surgeons can stream their actions live, greatly expanding their audience. Furthermore, surgeries can be recorded routinely, with surgeons saving those that were noteworthy in some way. Surgeons share their techniques with each other, and their experience helps hone the art. With routine recording, surgeons will be better able to collaborate and develop new techniques.

Better Imaging

The benefits of MR extend into offices. Professionals often view medical scans on computer screens and use a mouse and keyboard to manipulate the image and zoom in on certain areas. MR technology can track where a user is looking and respond to gestures, providing a more natural way to analyze an image. Furthermore, many modern imaging processes create 3D images. Through MR, users can visualize depth in a seamless manner. Even more mundane tasks can be aided by MR technology. Loading and modifying electronic medical records can be a somewhat cumbersome process, but new means of interacting enabled by MR can save time.


CT scans are often a significant source of distress for patients, as the noise and enclosed nature of the machines can lead to claustrophobia. Through MR and other technologies, medical experts can provide a simulation to help patients know what to expect. Furthermore, MR, along with AR and VR, can be used to help patients relax or distract themselves while being scanned. Simply being able to watch a movie or play a simple game can help patients pass the time and remain still while lengthy scans are underway. These benefits can improve overall medical treatment, as patients sometimes skip medical sessions and may decline helpful tests or therapies due to discomfort. Improving compliance is a powerful tool for improving patient outcomes.

Streamlining Care

Receptionists, nurses, doctors, and other professionals need to coordinate with each other in hospitals and clinics. Working as a team can be a challenge, and professionals often rely on multiple devices for communication and recalling charts and other data. By standardizing on mixed reality devices, health centers can provide a seamless means of communication and ensure everyone can send and receive notes instantly. Furthermore, MR devices can record and share voice communication, making it quicker and easier to send voice notes that can be heard between visits to patients. With MR technologies, health centers can allow healthcare professionals to spend more time with patients.

Many modern VR and mixed reality devices have a battery life of approximately three to four hours, so doctors would likely need to swap batteries or devices during long shifts. However, this problem will no doubt improve significantly in the future, and the technology will become far cheaper over time. Regardless of the limitations, however, MR is already in use around the world for a range of medical tasks. As medical professionals become more familiar with the technology, patients can expect to see headsets in hospitals and clinics on a regular basis in the near future.

Summit: The World’s Fastest Supercomputer


The battle for the world’s fastest supercomputer has a new victor: Summit

When rankings were unveiled in June of 2018, Summit ascended to the top of the list, displacing China’s Sunway TaihuLight.

According to IBM, Summit is able to achieve 200 petaflops of performance, or 200 quadrillion calculations per second. This power marks a significant gain on Sunway TaihuLight, which performs a still-staggering 87 petaflops. Summit holds more than 10 petabytes of RAM, and its funding came as part of a $325 million program funded by the United States Department of Energy. Each of Summit’s 4,608 nodes holds two IBM Power9 chips that run at 3.1 GHz. As with many new supercomputers, graphical processing units are also part of the design: Each node has six Nvidia Tesla V100 graphics chips, which perform certain classes of calculations far faster than a traditional CPU can and excel at many artificial intelligence tasks. The logistics of handling such a large installation are complex: Summit occupies an eighth of an acre and requires 4,000 gallons of water to be pumped through every minute to prevent overheating. Summit’s software is built on Red Hat Enterprise Linux 7.4.
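As a quick back-of-the-envelope check of those figures (assuming, for simplicity, that the V100 GPUs deliver essentially all of the machine's throughput and ignoring the Power9 CPUs' contribution):

```python
# Figures quoted above: 4,608 nodes, 6 GPUs per node, 200 petaflops total.
nodes = 4608
gpus_per_node = 6
total_pflops = 200

total_gpus = nodes * gpus_per_node            # 27,648 GPUs in the machine
pflops_per_gpu = total_pflops / total_gpus    # implied per-GPU share

print(total_gpus, round(pflops_per_gpu * 1000, 1))  # GPUs, teraflops per GPU
```

The implied figure of roughly 7 teraflops per GPU is in the right ballpark for a double-precision V100, which suggests the headline number is internally consistent with the hardware inventory.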

Located at Oak Ridge National Laboratory in Tennessee, Summit will be used for a range of tasks, including exploring new types of materials through simulations, attempting to uncover links between genetics and cancer, simulating fusion energy in an attempt to make it a feasible means of generating energy, and even simulating the universe to solve remaining astrophysical mysteries. Summit was designed to cover 30 different types of applications, making it a reasonably general-purpose supercomputer that will remain useful even when it’s eventually surpassed by future supercomputers.

A Boon to Science

A natural question that arises with supercomputers is whether they’re worth such large investments. After all, the internet makes it trivial to connect common computing equipment and create ad hoc systems capable of performing massive calculations. For some tasks, however, supercomputers provide capabilities other means of performing computational calculations cannot match, especially in the field of simulations. Weather simulations, for example, are notoriously complex, and much of what we know about weather today is the result of work done on massive supercomputers spanning multiple decades. Summit expands our capabilities, opening up new avenues of scientific exploration and potentially leading to breakthroughs in scientific fields.

In 2001, China had no devices that would meet the typical description of a supercomputer. Today, China dominates the Top500 list of supercomputers and is home to 202 of the top 500 devices, which moved the country ahead of the United States in November of 2017. China’s supercomputer aspirations are in line with its other goals: China wants bragging rights about supercomputing power, but it also wants to be the world’s leader in artificial intelligence, including machine learning and other growing technologies, and supercomputer prowess will aid it in this mission.

The appropriately named Summit does give the United States bragging rights for now, as having the fastest supercomputer is a source of pride and demonstrates the ability of the US to lead in technology. However, in terms of total supercomputing power, China is still on top, and it seems it’s only a matter of time before Summit is dethroned.

Microsoft’s AI Roadmap


Digital transformation is in full effect, and giants of the tech industry are investing heavily in new technologies

Due to its nearly limitless potential, artificial intelligence is at the forefront of much of this research, and Microsoft has been making headlines with new technologies, major acquisitions, and innovative ideas. The tech giant has long been moving toward a cloud-based future, and investment in AI is helping solidify its path toward becoming the AI leader in a number of fields. Here are a few technologies Microsoft has invested in recently and the potential impact they’ll have on the company’s future and society as a whole.

Project Brainwave

Traditional computer hardware can perform complex tasks quickly. However, most hardware is tuned primarily for general-purpose performance, and systems that demand effective real-time performance often rely on specialized hardware, as milliseconds saved can be critical in certain scenarios. Microsoft’s Project Brainwave relies on a type of chip known as a field-programmable gate array (FPGA). FPGA chips can be modified using software, enabling excellent flexibility for honing the chip toward specific applications and making real-time speeds more easily attainable. Integrated into Azure Machine Learning, Project Brainwave will use optical technology to mark potentially defective products, allowing companies to eliminate much of the labor required in quality assurance and allowing employees to focus on the subset of products that are too complex to test with computers alone. Kap Sharma from Hewlett Packard Enterprise recently spoke to us about Project Brainwave.

Better Accessibility

According to the World Bank, more than one billion people around the globe have a disability. For some, disability can have a dramatic effect on being able to use a computer, and computer access is effectively mandatory for a growing portion of the population. With only one in ten people with a disability having access to assistive technologies and products, according to Microsoft’s chief legal officer and president, Brad Smith, the tech field can certainly do better. Microsoft pledged $25 million over the next five years to develop better accessibility capabilities, and it’s leaning on AI to better meet the diverse needs of users. Many of these technologies go far beyond what is currently available: the Seeing AI app is designed to narrate what a user’s phone is seeing, giving it far greater capabilities than screen readers of the past, and the Helpicto app is designed to help people who have autism better make use of technology.

Transforming Healthcare

Launched in 2017, Microsoft’s Healthcare NExT initiative aims to close the gap between current healthcare systems and the promise of AI and cloud computing, and it’s taking a far-reaching approach. Doctors spend a considerable amount of time taking notes while talking with patients and in between appointments; Project Empower MD, which is being developed in conjunction with UPMC, will listen to what doctors say and observe what they do to help automate certain tasks. Microsoft Genomics will empower medical professionals to tap into the power of Microsoft Azure to perform genetic processing tasks, enabling new types of treatment. Microsoft is also working to make compliance with HIPAA and other regulations simpler, which helps medical professionals spend their time improving care for patients while knowing that patient data is properly and legally secured.

Dynamics 365 AI Solutions

Microsoft is working with Accenture to combine Microsoft’s AI Solution for Care with Accenture’s Intelligent Customer Engagement framework to help digital assistants provide better understanding and responsiveness for varying customer needs. Furthermore, Microsoft and Accenture will be working to incorporate Accenture Intelligent Revenue Growth technology into Microsoft’s AI technology, with the goal of using machine learning to help sales professionals. The aim is to increase the speed at which sales departments can move from research to deployment by a factor of ten, helping companies stay competitive in crowded markets and find use cases more traditional options would otherwise miss. Vertically integrated solutions hold tremendous promise, and this partnership between two experienced companies can prove to be beneficial.

Jonathan Fletcher, CTO at insurance giant Hiscox recently spoke to us about their use of the AI tools available with Microsoft Azure.

Human-Computer Interaction

Microsoft’s Cortana voice assistant has proven popular, but it’s not the only interactive tool the company is working on. In China, the sophisticated Xiaoice chatbot has made tremendous strides in recent years, and, with more than 16 channels available on WeChat and other messaging services, the bot has more than 500 million friends across these networks. Microsoft CEO Satya Nadella described the bot as a “bit of a celebrity” in China. Much like Google’s controversial Duplex, Xiaoice also has voice features, but it calls users upon request instead of placing calls on behalf of users. The voice it produces has received praise, partly for its artificially bright intonation, which isn’t designed to fool people into thinking they’re talking with an actual person while still providing a human-like experience. Microsoft hasn’t yet announced when Xiaoice will come to other regions, but it’s a safe bet the technology will make a splash once it arrives.

Credit scoring platform TransUnion, formerly Call Credit, is working with Microsoft to harness AI technology to improve risk management and fight fraud.

Cortana Improvements

Xiaoice’s arrival won’t mean Cortana is going away, and a recent acquisition might show the direction of her future. Microsoft recently acquired Semantic Machines, a Berkeley, California-based company taking an innovative approach to conversational AI.

More than 300,000 developers are currently using Microsoft’s Azure Bot Service, while more than a million are using Microsoft Cognitive Services. Combined with advances made while developing Cortana and Xiaoice, this acquisition poises Microsoft to offer a number of services not offered by competitors. Microsoft won’t develop this technology alone. In addition to bringing on experts from Semantic Machines, Microsoft’s cloud-focused strategy will also enable independent developers to make strides and develop their own use cases.

Defining the AI Future

In tech, it’s natural to get caught up in exciting new technologies coming online and focus on the benefits such advances can bring for society. However, advances brought online in a haphazard manner can have unintended consequences, and it’s important to take a step back and ask questions. In January of 2018, Microsoft released a book entitled “The Future Computed: Artificial Intelligence and Its Role in Society,” which takes a look at progress made over the years and considers the ramifications on society as a whole. Just as most people need computer access to function in society, AI will inevitably become a part of our daily lives.

What will the partnership between computers and humans look like, and who will make decisions about what is appropriate or not? As AI becomes capable of filling more and more jobs, how will society ensure people who worked in these fields are able to move on after being displaced? To craft a framework, the book outlines six areas Microsoft is focusing on: fairness; reliability and safety; privacy and security; inclusivity; transparency; and accountability. While the conversation will be an ongoing one, Microsoft has shown a willingness to engage the debate in a sophisticated and forward-thinking manner.

The future will be dominated by artificial intelligence, and Microsoft is investing heavily in new fields to help businesses and consumers alike. Furthermore, these moves offer compelling reasons to use Microsoft’s cloud offerings, enabling businesses ranging from enterprises to startups to develop ideas quickly and bring them to market in a scalable manner. Combined with Azure’s growth, focusing on AI gives Microsoft a means to provide advantages over its competitors and shows how its move toward cloud services is proving to be a wise one. The future of AI is effectively impossible to predict, so it’s hard to determine which technologies will thrive. However, the wide range of technologies Microsoft is investing in should place it in position to take advantage of the breakthroughs that end up having a dramatic effect on society.

Big Money Automation: RPA in Financial Services


Finance and technology have a strange relationship in an industry that is changing rapidly

While much of the tech innovation is driven by financial institutions looking for ways to increase the bottom line, many such institutions still rely on outdated legacy systems and a lot of manual processing and checking. In this context, the development of business-ready RPA in the financial sector could have a large impact on profitability, as evidenced by Bank of England chief Mark Carney’s prediction that 15% of finance roles could be phased out by robotic processes in the coming years.

So what is RPA, the practice at the heart of automation in finance? As Leslie Willcocks, researcher at LSE School of Management, said in an interview with McKinsey: “RPA takes the robot out of the human. The average knowledge worker employed on a back-office process has a lot of repetitive, routine tasks that are dreary and uninteresting.

RPA is a type of software that mimics the activity of a human being in carrying out a task within a process. It can do repetitive stuff more quickly, accurately, and tirelessly than humans, freeing them to do other tasks requiring human strengths such as emotional intelligence, reasoning, judgment, and interaction with the customer.”

The nuts and bolts of how RPA works is quite complex, but the “need to know” aspect is that bots can now track and mimic human behaviour to learn how to create automated processes for these tasks, and this can be carried out quickly and easily without the need for a programmer to write scripts and establish the rules themselves.

Applications of RPA in finance

Accounts receivable: This might be one of the clearest use cases for RPA in the finance function. Anyone with experience of SAP Finance software will recognise the laborious tasks involved in dunning and other debtor management business processes, which can require strictly defined but at times varying steps to be taken when looking for payment. While it has been difficult to program software to carry out these tasks before now, the advent of machine learning allows RPA solutions to track and build rules based on how clerical staff process these routine tasks.
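As a sketch of what such learned rules might look like once extracted (the thresholds and actions below are hypothetical illustrations, not taken from SAP or any RPA vendor), the core of a dunning workflow is just an escalation table keyed on days overdue:

```python
from datetime import date

# Hypothetical escalation rules of the kind an RPA bot might learn
# from watching clerical staff handle overdue invoices.
DUNNING_STEPS = [
    (0,  "no action"),
    (7,  "friendly payment reminder"),
    (30, "formal dunning letter"),
    (60, "final notice, flag for collections"),
]

def dunning_action(due: date, today: date) -> str:
    """Return the most severe dunning step whose threshold is met."""
    days_overdue = (today - due).days
    action = "no action"
    for threshold, step in DUNNING_STEPS:
        if days_overdue >= threshold:
            action = step
    return action

print(dunning_action(date(2018, 5, 1), date(2018, 6, 15)))  # formal dunning letter
```

In a real deployment, the interesting part is that machine learning infers tables like this from observed clerk behaviour rather than having a programmer write them by hand.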

Investment management: There is a sweet spot for robo-advisors in the investment advisory market. While investors with large amounts of capital will always get value from the fees paid to investment managers, this personal advice is out of reach for most everyday investors. The ability of robo-advisors to give generic but useful advice to investors based on their portfolio profile could benefit a large and currently underserved market.

Managing technological migration and legacy systems, data management: RPA is not only useful for everyday operations; it can also aid in business change. As we mentioned above, many financial institutions are wrestling with outdated systems that require herculean efforts, especially with regards to data migration. This can result in tech transformations involving as much manual input as the old systems they are in the process of replacing.

One insurance company in the Caribbean, Guardian Group, successfully implemented RPA for streamlining and speeding up its move to a new suite of tools. RPA was able to “access, calculate, copy, paste, or use embedded business rules to interpret, use, and enter data into the core enterprise application.” Based on how clerical staff navigate between systems, inputting and copying data as they go, RPA can learn the optimal routes between diverse arrays of systems.

Insurance policy creation: For insurers, a lot of new customers require tailor-made policies based on their circumstances. However, the majority of new policies could be considered “boilerplate”, not requiring much human expertise. For these a-few-sizes-fit-all policies, RPA can establish best practices and provide these policies to the right customers with minimal human oversight.

Verifying the claims management process for insurers: Further along the insurance lifecycle, claims management often requires clerical staff to repetitively check payments and documentation across various systems, which is hard to script by hand because of the heterogeneous set of systems that must be consulted. As with other RPA applications, machine learning can readily establish work processes for this kind of activity.

Regulation compliance and reporting, KYC/AML checks: Since most compliance requirements are highly rule-based, this area is ripe for RPA applications. KYC/AML checks, for example, can differ a lot in their implementation but are not heterogeneous enough to preclude building an automated workflow using RPA tools.
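A deliberately simplified sketch of the kind of rule-based screening an RPA workflow might automate (the $10,000 threshold echoes common cash-reporting rules, but these specific rules and flag strings are hypothetical, not a real compliance standard):

```python
REPORT_THRESHOLD = 10_000  # e.g. a cash-reporting threshold in USD

def flag_transaction(amount: float, daily_total: float) -> list:
    """Return a list of compliance flags raised by a single transaction."""
    flags = []
    if amount >= REPORT_THRESHOLD:
        flags.append("report: over threshold")
    elif amount >= 0.9 * REPORT_THRESHOLD:
        # Amounts just under the limit can indicate deliberate structuring.
        flags.append("review: possible structuring")
    if daily_total >= 2 * REPORT_THRESHOLD:
        flags.append("review: high daily volume")
    return flags

print(flag_transaction(9_500, 5_000))  # ['review: possible structuring']
```

Because each rule is explicit and deterministic, a bot can apply checks like these across thousands of accounts far faster and more consistently than clerical staff, leaving humans to investigate only the flagged cases.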

These use cases should give a sense of the basic functioning of RPA in finance, as well as the potential it holds. The broader context of this change is important in terms of measuring the impact and dynamics of RPA-led business processes. Other breakthroughs like the use of chatbots, process mining, and cognitive computing are playing their role in the adoption of RPA. In turn, this wave of new tools will result in a new workforce of professionals that work with these tools to realize increased productivity and efficiency.

The advent of AI tools like RPA will not be a one-for-one replacement of human action with machine action. In reality, business processes (and in some cases business models) will need to be reconsidered in light of the new options available. More of the workforce will be employed in leveraging the capabilities of these tools, and fewer will work on processing repetitive tasks.

The True Cost of Cybercrime


The incidence and cost of cybercrime are skyrocketing, and businesses are having trouble keeping up

According to research from Accenture, the cost of cybercrime increased by 27 percent between 2016 and 2017, and the average annualized cost of cybercrime now stands at $11.7 million per company. As the world becomes more connected and more data is stored, these costs are only expected to rise, according to the report. Furthermore, the public pays more attention to data compromises than ever before, and the reputational hit companies take when their systems are breached has grown significantly. Large players in the industry are taking note, and companies including Microsoft and HPE are taking steps to help mitigate the damage. Kyle Todd, HPE’s Microsoft Category Leader, recently explained some of the improvements being made.

Cybercrime 101: Just the Facts

The Accenture report also notes that ransomware attacks, perhaps the most lucrative form of cybercrime, doubled between 2016 and 2017. Companies spend approximately 3.8 percent of their IT budgets on security, down from 4 percent in 2014. Perhaps most concerning, 56 percent of executives say their response to security is reactive rather than proactive. Todd explains how this approach leaves companies vulnerable: it typically takes cybercriminals only 24 to 48 hours to compromise a system, yet they go undetected for an average of 100 days or longer. Undetected intrusions let attackers collect more and more data, compromise other systems, and plan their next moves, so a subsequent ransomware attack, for example, may be even more expensive to resolve.

Credential Guard

Some of the most powerful tools for protecting data come included with Windows Server 2016. Credential Guard, in particular, should be a central technology for anyone relying on Windows Server. Modern authentication relies on credential hashes rather than plaintext passwords, and removing passwords from the exchange significantly reduces potential attack vectors. However, cybercriminals can use stolen hashes in place of passwords, a technique known as pass-the-hash, giving them virtual keys to a company’s data. Because hashes are so widely trusted, these compromises often go undetected for extended periods. Once attackers gain domain admin privileges through compromised hashes, the entire system is effectively theirs. Credential Guard includes a number of integrated safeguards against these attacks, which can be among the most difficult to detect and recover from. By cutting them off, companies can fend off attackers using some of the most popular intrusion techniques.
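Why is a stolen hash as dangerous as the password itself? The toy Python sketch below illustrates the idea: if the server only ever compares hashes, an attacker who steals the hash never needs the password. Real Windows authentication uses different primitives entirely (SHA-256 here is just a stand-in), and the function names are hypothetical:

```python
import hashlib

# Toy illustration of pass-the-hash. SHA-256 is a stand-in; real
# Windows credential handling (e.g. NTLM) works differently.

def stored_hash(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# The server only ever compares hashes -- it never sees the password.
def authenticate(presented_hash: str, account_hash: str) -> bool:
    return presented_hash == account_hash

account = stored_hash("correct horse battery staple")

# Legitimate login: the client hashes the password it knows.
assert authenticate(stored_hash("correct horse battery staple"), account)

# Pass-the-hash: a stolen hash authenticates without the password.
stolen_hash = account
assert authenticate(stolen_hash, account)
```

This is why Credential Guard focuses on isolating hashes so they cannot be extracted and replayed in the first place.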

Just Enough Administration and Just-in-Time Administration

Historically, admin accounts have had nearly free rein over systems, with only minor limitations in place. Just Enough Administration offers a more sophisticated approach: even if an admin account is compromised, would-be attackers will find themselves with very few privileges, significantly limiting their ability to inflict damage or steal data. Just-in-Time Administration addresses the problem of privilege creep: when users are granted admin rights to perform certain tasks, those rights are rarely taken away. The Just-in-Time approach ensures that accounts which no longer need admin privileges don’t linger as attack vectors. By making admin access removable by default, Just-in-Time Administration provides a more fine-grained approach to information access.
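The Just-in-Time idea, granting admin rights with an expiry instead of permanently, can be sketched in a few lines of Python. The class and method names below are hypothetical and are not the Windows Server API; this only illustrates the concept:

```python
import time

# Toy sketch of Just-in-Time administration: admin rights carry an
# expiry timestamp, so privileges lapse instead of accumulating.
# Class and method names are hypothetical illustrations.

class JitGrants:
    def __init__(self):
        self._grants = {}  # user -> expiry timestamp (seconds)

    def grant_admin(self, user: str, ttl_seconds: float, now: float = None) -> None:
        """Grant admin rights that automatically expire after ttl_seconds."""
        now = time.time() if now is None else now
        self._grants[user] = now + ttl_seconds

    def is_admin(self, user: str, now: float = None) -> bool:
        """A user is admin only while an unexpired grant exists."""
        now = time.time() if now is None else now
        expiry = self._grants.get(user)
        return expiry is not None and now < expiry
```

The design choice is that forgetting to revoke access is the common failure mode, so expiry is the default and permanence requires deliberate renewal.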

Device Guard and Enhanced Auditing Capabilities

We use more devices, and more classes of devices, than ever before, which creates a wide array of potential attack vectors. Device Guard lets administrators create policies that restrict a hacker’s ability to install malware that could leave an entire datacenter vulnerable. Enhanced auditing capabilities serve as a powerful complement: malicious actors can often fly under the radar while compromising a system, because the telltale signs of intrusion get dismissed as noise. Enhanced auditing seeks out these signs, giving companies the ability to react promptly and prevent damage.

HPE Gen10 Server Security: Silicon Root of Trust

Network security remains at the forefront of keeping systems safe. However, hackers are increasingly targeting system BIOS and firmware, finding ways to infiltrate systems that won’t be caught by firewalls and other network defenses. Instead of treating pieces of firmware as independent units, HPE has created a cohesive web of firmware that operates in an integrated manner. The Silicon Root of Trust anchors a fingerprint of the system’s firmware in silicon, and this fingerprint is measured regularly, so if critical firmware is compromised, administrators are alerted instantly and the affected area can be isolated. As the line between hardware and software blurs, hardware security is becoming ever more important. The Silicon Root of Trust serves as a powerful hardware-rooted means of monitoring for intrusions and mitigating potential harm.
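The fingerprinting idea can be sketched as follows, assuming firmware components are hashed together and the result compared against a known-good value. HPE’s real mechanism is implemented in silicon; this Python toy only illustrates the concept, and all names are invented:

```python
import hashlib

# Toy sketch of a firmware "fingerprint" check in the spirit of a
# silicon root of trust: hash each firmware component in a stable
# order, combine into one fingerprint, and compare to a golden value.

def fingerprint(components: dict) -> str:
    """components: mapping of component name -> firmware bytes."""
    h = hashlib.sha256()
    for name in sorted(components):  # stable order so hashing is deterministic
        h.update(name.encode())
        h.update(components[name])
    return h.hexdigest()

# Known-good fingerprint recorded when the system was provisioned.
golden = fingerprint({"bios": b"v1.0", "bmc": b"v2.3"})

def check(components: dict) -> bool:
    """True if the measured fingerprint matches the known-good value."""
    return fingerprint(components) == golden
```

Any single-byte change to any component yields a different fingerprint, which is what makes regular measurement an effective tamper alarm.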

HPE Gen10 Server security: HPE Secure Compute Lifecycle

The National Institute of Standards and Technology (NIST) stands at the forefront of developing systems safe from cybersecurity attacks, but the complexity of the technology, and the potential costs involved, means adoption has been fairly slow. NIST’s advances include standards for both software and hardware, and systems meeting these standards can be assured of state-of-the-art protection. HPE says it is the only system vendor to have invested in the high levels of security NIST specifies, at costs competitive with systems that don’t comply with the standards. Building on NIST research makes HPE a clear choice for companies seeking the utmost in security. HPE also follows NIST guidelines for ensuring that data on used storage devices is scrubbed so thoroughly it cannot be recovered, even with the most sophisticated tools available.

For those outside of IT operations, the solution to cybercrime seems simple: Just spend more money. Those who work in datacenters and make decisions, however, realize that budgets can’t keep rising forever, and what’s needed is a smart approach that takes advantage of modern security practices. HPE is focused on delivering the highest levels of security for their customers, but they’re also mindful of the typical budgets companies can afford. Security is a critical investment, and using contemporary approaches to both hardware and software security can prevent the cost and embarrassment of having a system compromised.

Gatwick Airport embraces IoT and Machine Learning


As the eighth busiest airport in Europe and the largest single-runway airport in the world, London Gatwick Airport is an essential fixture for international travellers

In an effort to keep up with the demands of the digital world, Gatwick recently announced the modernization of its IT infrastructure, in partnership with Hewlett Packard Enterprise and Aruba.

Although typical airport IT upgrades take four years, Gatwick’s network was upgraded in just 18 months, all while avoiding downtime and instability. Work was completed overnight, with just a two-hour window for upgrades and another two hours to roll back to the legacy network if needed. Data links were limited under Gatwick’s old IT infrastructure, but the new network uses a cleaner meshed design providing up to 10 times more data connections. As new technologies continue to emerge for consumers, airport management, airlines, and the businesses in the airport that rely on the infrastructure, Gatwick will have a robust backbone to support them.

We speak to Gatwick’s CIO, Cathal Corcoran, and Hewlett Packard Enterprise UK&I MD, Marc Waters, below.

Most busy international airports have several runways and ample real estate. Gatwick, on the other hand, must operate with a single runway and limited space. Maximizing efficiency is key to ensuring the airport can serve the needs of the UK. IoT-enabled heat sensors will track movement and how busy the airport is, allowing management to better utilize resources and improve the passenger journey through the airport. Tracking data lets the airport handle logistical issues that can’t be solved through expansion, ensuring a smoother and more efficient experience for customers and a better business foundation for the airlines that operate at Gatwick.

World-Class WiFi

Smartphones, laptops, and entertainment devices have made the time-consuming process of air travel more tolerable and more productive, but serving such a large number of travellers in small spaces is a major challenge. Those in Gatwick can expect typical speeds of 30Mbps, providing plenty of bandwidth for working online or streaming video while waiting. Fast and stable WiFi also makes for smoother operations for airlines and other companies, letting them focus on offering excellent and affordable service without worrying about outages or sluggish speeds.

Experts are good at finding ways to make the most of limited resources, which is particularly important at Gatwick. Aided by IT, however, they can do even more. Machine learning can detect busy areas in the airport via smartphones, and tracking these results over the long term can provide key insights for optimizing day-to-day operations. When making decisions, Gatwick’s management will be armed with powerful data offering insights not attainable with more traditional technologies, and the new IT infrastructure will be key to this analysis. Facial recognition technology will boost security and help track late passengers, while personalized services based on smartphones or wearable technology can deliver valuable updates to travellers on a personal level.


Dealing with lost baggage can be a time-consuming and often stressful process. Armed with its new IT infrastructure, Gatwick and its airline operators are poised to offer a better alternative. Being able to track luggage and its owners creates new opportunities for simplifying the check-in and baggage claim process, helping get travellers in and out of the airport promptly and seamlessly.

Around 45 million people travel through Gatwick each year, and the airport’s unique constraints make operation an ongoing challenge. However, new technology offers tremendous promise that will serve Gatwick well for passengers today, and robust infrastructure provides a solid foundation for testing and implementing new technology for years to come.

Just announced! Latest addition to HPE SimpliVity provides support for Microsoft Hyper-V


HPE recently announced the latest addition to the HPE SimpliVity hyperconverged portfolio — HPE SimpliVity 380 with Microsoft Hyper-V.

This new addition allows HPE to offer customers a wider range of multi-hypervisor options — delivering a more holistic hyperconverged offering with added management, optimization, intelligence, and flexibility.

HPE SimpliVity 380 with Microsoft Hyper-V provides businesses with a simpler IT infrastructure solution, converging servers, storage, and storage networking into one easy-to-manage, software-defined platform. The result is increased business agility and the economics of the cloud in an on-premises solution. The pre-integrated, all-flash, hyperconverged building block combines all infrastructure and advanced data services for virtualized workloads — including VM-centric management and mobility, data protection, and guaranteed data efficiency.

Below HPE’s Thomas Goepel looks at the architecture that’s bringing Hyper V to HPE SimpliVity.

HPE SimpliVity 380 now enables customers to deploy the industry’s most powerful hyperconverged platform with either Microsoft Hyper-V or VMware vSphere private clouds. The latest update was designed to give customers more multi-hypervisor choices together with the benefits of HPE SimpliVity:

VM centric management and mobility: HPE SimpliVity hyperconvergence enables policy-based, VM-centric management abstracted from the underlying hardware to simplify day-to-day operations and enable the secure digital workspace.  It also provides seamless application and data mobility, empowering end users and driving increased productivity while increasing efficiencies and reducing costs.

Data protection: The HPE SimpliVity hyperconverged solution delivers built-in backup and recovery at no additional cost. These data protection features include the resilience, built-in backup, and bandwidth-efficient replication needed to ensure the highest levels of data integrity and availability, eliminating the need for legacy data protection products.

Data efficiency: HPE SimpliVity RapidDR leverages the inherent data efficiencies of HPE SimpliVity hyperconverged infrastructure to automate disaster recovery, slashing recovery point objectives (RPOs) and recovery time objectives (RTOs) from days or hours to seconds, with a guaranteed 60-second restore for a 1TB VM.

Below, HPE’s Stuart Gilks explains why the move expands hypervisor choice for customer use-cases.

Citrix Ready HCI Workspace Appliance Program

In addition, HPE extended its partnership with Citrix by integrating the HPE SimpliVity portfolio, including the new HPE SimpliVity 380 with Microsoft Hyper-V, into the Citrix Ready HCI Workspace Appliance Program. The program allows HPE and Citrix to set the standard for customers to easily deploy digital workspaces with multi-hypervisor, multi-cloud flexibility, resulting in world-class digital collaboration and a borderless, productive workplace.

About Chris Purcell

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. For more information about HPE SimpliVity 380 with Microsoft Hyper-V, click here. To learn more about how hyperconvergence can simplify your datacenter, download the free e-book, Hyperconvergence for Dummies.

To read more articles from Chris Purcell, check out the HPE Converged Data Center Infrastructure blog.


Blockchain: A Chaotic Ecosystem?


After a couple of years of speculation, Blockchain and cryptotechnology finally received substantial mainstream attention in 2017

Governments got on board, ICOs progressed at a breakneck speed, and the market cap of the entire industry skyrocketed accordingly. The keyword now driving many discussions in the domain is “mainstream adoption”, with investors, blockchain entrepreneurs, and mainstream tech companies trying to predict where the market is going to go.

It is an important time for the industry, since first-mover advantage is likely to set the scene for blockchain platform selection over the next 10 years. Just as Microsoft staked its claim to a massive user base with Windows, and Google amassed a fortune off the popularity of its search engine, the blockchain solutions that get a first foot in the door of the new tech ecosystem will be set up for future dominance.

However, it is anyone’s guess how this will shake out. The industry now is quite frankly a mess, with tech incumbents slow to get up to speed while the blockchain-native projects are beset by controversy and uncertainty as well as a seemingly never-ending proliferation of new competitors holding ICOs every week. Here we will briefly outline the major platforms and protocols in the oncoming scrap to be the king of the blockchain jungle.
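Before surveying the contenders, the core mechanic these platforms all share is worth making concrete: each block commits to the hash of its predecessor, so tampering with history is detectable. A minimal Python sketch of that idea (a toy structure, not any production chain):

```python
import hashlib
import json

# Toy hash-linked chain: each block stores the hash of the previous
# block, so altering any past block invalidates everything after it.
# Real platforms add consensus, signatures, and much more.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    """Check every block's back-link against the recomputed hash."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )
```

Editing any historical block changes its hash, breaking the link stored in the next block — which is why the data structure is tamper-evident regardless of which platform implements it.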

The crypto-native heavyweights: Ethereum, Ripple, Stellar, Hyperledger, (R3) Corda and others
Given the seemingly endless applications of blockchain technology, there is a broad and blurry bracket of crypto protocols offering similar solutions, ranging from smart contract development (Ethereum) to blockchain architecture (Hyperledger) to payment protocols (Ripple). Here we’ll sketch out what each does.


Ethereum has usurped Bitcoin in many ways as the flagship of the crypto industry. Ethereum uses protocols and a blockchain to enable users to create complex transactions and smart contracts, using a Virtual Machine to automate in a transparent and decentralised way much of what is run on proprietary servers now. With the launch of the Enterprise Ethereum Alliance, the world’s largest open source blockchain initiative, Ethereum is pulling ahead of the pack in the enterprise-friendly stakes.


One of the most notably successful platforms of late, Ripple has developed a cryptographic protocol to enable superfast cross-border settling of financial transactions. It is in many ways a halfway house between crypto and fiat, using cryptographic protocols to enable fast international fiat money transfers. On the 13th of April, Banco Santander released a money transfer app that uses the Ripple protocol, and the platform allows third-party developers to use its open source framework and blockchain to integrate fast payments into their solutions.

The Ripple platform also includes a token (XRP), which doesn’t actually have to be used in Ripple protocol based apps.


The Stellar protocol grew out of Ripple and is run by the Stellar Foundation. Like Ripple, it primarily targets international value transfers, but it is more focussed on transfers using its own token and on allowing third parties to process payments quickly and cheaply. Its early target users are non-profits, third-world organisations, and ecommerce. One interesting aspect of the platform is IBM’s involvement; it seems IBM will lean on Stellar for many of its blockchain solutions in the future.


The Hyperledger project, created by the Linux Foundation, is focussed more on the building blocks (so to speak) of ledger technology. Companies intending to create their own crypto solutions or private blockchains can utilise the Hyperledger protocols and open source stack as tools. Intel are a notable partner, having developed the Hyperledger Sawtooth protocol to help speed up blockchain transactions.


Created by R3 (a consortium of major financial institutions), Corda aims to provide a range of business-ready blockchain solutions mainly focussed on financial transactions but also applicable to supply-chain transparency and securities.

Tech companies getting in on the action

Notably absent, Google has had little involvement in blockchain, and neither has Facebook. While some skeptics consider this a negative sign for blockchain technology, others would say that the transparent nature of blockchain is not in the best interests of these companies. Regardless, some tech incumbents are weighing into the domain.

Microsoft, meanwhile, are ramping up for the release of their Coco Framework on the Azure platform, which has garnered a lot of positive press for leveraging existing blockchain-focused consensus mechanisms and ticking all the control boxes enterprises require. Just after it was announced last year, Azure CTO Mark Russinovich spoke to TechNative about the move.

While they haven’t had much involvement, Amazon are working to make blockchain solutions like those mentioned above (especially Corda and Sawtooth) work seamlessly with AWS – unsurprisingly directly positioned against Microsoft’s Coco.

The bottom line: Anyone’s game

While individual businesses have a broad range of options for developing blockchain and ledger solutions, we are still in the early stages of this domain, with few head-to-head offerings or clearly defined markets. It will be interesting to see whether established companies gain more traction with ledger technology or the crypto-native platforms hold their market share in this nascent industry.

About the Author

Eoghan Gannon is a senior writer at TechNative, a cryptocurrency researcher, and an entrepreneur. His interests lie in how blockchain technology is changing business.
