Microsoft’s AI Roadmap


Digital transformation is in full effect, and giants of the tech industry are investing heavily in new technologies

Due to its nearly limitless potential, artificial intelligence is at the forefront of much of this research, and Microsoft has been making headlines with new technologies, major acquisitions, and innovative ideas. The tech giant has long been moving toward a cloud-based future, and investment in AI is helping solidify its path toward becoming the AI leader in a number of fields. Here are a few technologies Microsoft has invested in recently and the potential impact they’ll have on the company’s future and society as a whole.

Project Brainwave

Traditional computer hardware can perform complex tasks quickly. However, most hardware is tuned primarily for general-purpose performance, and systems that demand effective real-time performance often rely on specialized hardware, as milliseconds saved can be critical in certain scenarios. Microsoft’s Project Brainwave relies on a type of chip known as a field-programmable gate array (FPGA). FPGA chips can be reconfigured using software, enabling excellent flexibility for honing the chip toward specific applications and making real-time speeds more easily attainable. Integrated into Azure Machine Learning, Project Brainwave will use optical technology to mark potentially defective products, allowing companies to eliminate much of the labor required in quality assurance and letting employees focus on the subset of products that are too complex to test with computers alone. Kap Sharma from Hewlett Packard Enterprise recently spoke to us about Project Brainwave.

Better Accessibility

According to the World Bank, more than one billion people around the globe have a disability. For some, disability can have a dramatic effect on being able to use a computer, and, as we all know, computer access is effectively mandatory for a growing portion of the population. With only one in ten people with a disability having access to assistive technologies and products, according to Microsoft’s chief legal officer and president, Brad Smith, the tech field can certainly do better. Microsoft pledged $25 million over the next five years to develop better accessibility capabilities, and it’s leaning on AI to better meet the diverse needs of users. Many of these technologies go far beyond what is currently available: the Seeing AI app is designed to narrate what a user’s phone is seeing, giving it far greater capabilities than the screen readers of the past, and the Helpicto app is designed to help people who have autism better make use of technology.

Transforming Healthcare

Launched in 2017, Microsoft’s Healthcare NExT initiative aims to close the gap between current healthcare systems and the promise of AI and cloud computing, and it’s taking a far-reaching approach. Doctors spend a considerable amount of time taking notes while talking with patients and in between appointments; Project EmpowerMD, which is being developed in conjunction with UPMC, will listen to what doctors say and observe what they do to help automate certain tasks. Microsoft Genomics will empower medical professionals to tap into the power of Microsoft Azure to perform genetic processing tasks, enabling new types of treatment. Microsoft is also working to make compliance with HIPAA and other regulations simpler, which lets medical professionals spend their time improving care for patients while knowing that patient data is properly and legally secured.

Dynamics 365 AI Solutions

Microsoft is working with Accenture to combine Microsoft’s AI Solution for Care with Accenture’s Intelligent Customer Engagement framework to help digital assistants better understand and respond to varying customer needs. Furthermore, Microsoft and Accenture will be working to incorporate Accenture’s Intelligent Revenue Growth technology into Microsoft’s AI technology, with the goal of using machine learning to help sales professionals. The aim is to increase the speed at which sales departments can improve research and deployment by a factor of ten, helping companies stay competitive in crowded markets and find use cases that more traditional options would otherwise miss. Vertically integrated solutions hold tremendous promise, and this partnership between two experienced companies could prove to be beneficial.

Jonathan Fletcher, CTO at insurance giant Hiscox, recently spoke to us about their use of the AI tools available with Microsoft Azure.

Human-Computer Interaction

Microsoft’s Cortana voice assistant has proven popular, but it’s not the only interactive tool the company is working on. In China, the sophisticated Xiaoice chatbot has made tremendous strides in recent years, and, with more than 16 channels available on WeChat and other messaging services, the bot has more than 500 million friends across these networks. Microsoft CEO Satya Nadella described the bot as a “bit of a celebrity” in China. Much like Google’s controversial Duplex, Xiaoice also has voice features, but it calls users upon request instead of placing calls on behalf of users. The voice it produces has received praise, partly for its artificially bright intonation, which isn’t designed to fool people into thinking they’re talking with an actual person while still providing a human-like experience. Microsoft hasn’t yet announced when Xiaoice will come to other regions, but it’s a safe bet the technology will make a splash once it arrives.

Credit scoring platform TransUnion, formerly Callcredit, is working with Microsoft to harness AI technology to improve risk management and fight fraud.

Cortana Improvements

Xiaoice’s arrival won’t mean Cortana is going away, and a recent acquisition might show the direction of her future. Microsoft recently acquired Semantic Machines, a Berkeley, California-based company taking an innovative approach to conversational AI.

More than 300,000 developers are currently using Microsoft’s Azure Bot Service, while more than a million are using Microsoft Cognitive Services. Combined with advances made while developing Cortana and Xiaoice, this acquisition positions Microsoft to offer a number of services not offered by competitors. Microsoft won’t develop this technology alone: in addition to bringing on experts from Semantic Machines, Microsoft’s cloud-focused strategy will also enable independent developers to make strides and develop their own use cases.

Defining the AI Future

In tech, it’s natural to get caught up in exciting new technologies coming online and focus on the benefits such advances can bring for society. However, advances brought online in a haphazard manner can have unintended consequences, and it’s important to take a step back and ask questions. In January of 2018, Microsoft released a book entitled “The Future Computed: Artificial Intelligence and Its Role in Society,” which takes a look at progress made over the years and considers the ramifications for society as a whole. Just as most people need computer access to function in society, AI will inevitably become a part of our daily lives.

What will the partnership between computers and humans look like, and who will make decisions about what is appropriate or not? As AI becomes capable of filling more and more jobs, how will society ensure people who worked in these fields are able to move on after being displaced? To craft a framework, the book outlines six areas Microsoft is focusing on: fairness; reliability and safety; privacy and security; inclusivity; transparency; and accountability. While the conversation will be an ongoing one, Microsoft has shown a willingness to engage in the debate in a sophisticated and forward-thinking manner.

The future will be dominated by artificial intelligence, and Microsoft is investing heavily in new fields to help businesses and consumers alike. Furthermore, these moves will offer compelling reasons to use Microsoft’s cloud offerings, enabling businesses ranging from enterprises to startups to develop ideas quickly and bring them to market in a scalable manner. Combined with Azure’s growth, focusing on AI gives Microsoft a means to gain advantages over its competitors and shows how its move toward cloud services is proving to be a wise one. The future of AI is effectively impossible to predict, so it’s hard to determine which technologies will thrive. However, the wide range of technologies Microsoft is investing in should place it in a position to take advantage of the breakthroughs that end up having a dramatic effect on society.

Big Money Automation: RPA in Financial Services


Finance and technology have a strange relationship, and the industry is changing rapidly

While much of this tech innovation is driven by financial institutions looking for ways to increase the bottom line, many such institutions still rely on outdated legacy systems and a lot of manual processing and checking. In this context, the development of business-ready RPA in the financial sector could have a large impact on profitability, as evidenced by Bank of England chief Mark Carney’s prediction that 15% of finance roles could be phased out in the coming years by robotic processes.

So what is RPA, the practice at the heart of automation in finance? As Leslie Willcocks, researcher at LSE School of Management, said in an interview with McKinsey: “RPA takes the robot out of the human. The average knowledge worker employed on a back-office process has a lot of repetitive, routine tasks that are dreary and uninteresting.

RPA is a type of software that mimics the activity of a human being in carrying out a task within a process. It can do repetitive stuff more quickly, accurately, and tirelessly than humans, freeing them to do other tasks requiring human strengths such as emotional intelligence, reasoning, judgment, and interaction with the customer.”

The nuts and bolts of how RPA works is quite complex, but the “need to know” aspect is that bots can now track and mimic human behaviour to learn how to create automated processes for these tasks, and this can be carried out quickly and easily without the need for a programmer to write scripts and establish the rules themselves.
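The record-and-replay idea can be sketched in a few lines: a bot stores the steps it observes as structured data, then applies them to a target system later. Everything below (the step names, the actions, the dict standing in for an application) is illustrative, not any vendor’s actual API.

```python
# Minimal sketch of RPA "record and replay": observe a sequence of user
# actions, store them as structured steps, and replay them against new data.
from dataclasses import dataclass, field


@dataclass
class Step:
    action: str   # e.g. "write", "copy", "paste"
    target: str   # e.g. a field name in an application
    value: str = ""


@dataclass
class RecordedProcess:
    steps: list = field(default_factory=list)

    def record(self, action, target, value=""):
        self.steps.append(Step(action, target, value))

    def replay(self, system):
        """Apply each recorded step to a target system (here, a dict)."""
        for step in self.steps:
            if step.action == "write":
                system[step.target] = step.value
            elif step.action == "copy":
                system["clipboard"] = system.get(step.target, "")
            elif step.action == "paste":
                system[step.target] = system.get("clipboard", "")
        return system


# Record once by watching a clerk work, then replay without human input.
bot = RecordedProcess()
bot.record("write", "invoice_id", "INV-1001")
bot.record("copy", "invoice_id")
bot.record("paste", "erp_reference")
result = bot.replay({})
```

In a real tool, the "system" would be a GUI or a set of enterprise applications, and the recording would come from observing screen and keyboard activity rather than explicit calls.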

Applications of RPA in finance

Accounts receivable: This might be one of the clearest use cases for RPA in the finance function. Anyone with experience of SAP Finance software will recognise the laborious tasks involved in dunning and other debtor management business processes, which can require strictly defined but at times varying steps to be taken when looking for payment. While it has been difficult to program software to carry out these tasks before now, the advent of machine learning allows RPA solutions to track and build rules based on how clerical staff process these routine tasks.
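As a toy illustration of the kind of rules such a tool might derive from watching clerical staff, a dunning workflow reduces to thresholds on how overdue an invoice is. The thresholds here are invented, not SAP’s actual dunning configuration.

```python
# Hypothetical escalation rules an RPA tool might learn for debtor management:
# the further overdue an invoice, the firmer the dunning action.
def dunning_level(days_overdue: int) -> str:
    if days_overdue <= 0:
        return "none"
    if days_overdue <= 14:
        return "friendly reminder"
    if days_overdue <= 30:
        return "formal notice"
    return "escalate to collections"
```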

Investment management: There is a sweet spot for robo-advisors in the investment advisory market. While investors with large amounts of capital will always get value from the fees paid to investment managers, that personal advice is out of reach for most everyday investors. The ability of robo-advisors to give generic but useful advice based on an investor’s portfolio profile could therefore serve a large and currently underserved market.
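A hypothetical robo-advisor rule of this generic kind might map age and risk appetite to a stock/bond split, loosely in the spirit of the well-known “100 minus age” heuristic. The numbers below are illustrative, not investment advice or any provider’s actual model.

```python
# Toy robo-advisor: derive a generic equity/bond allocation from an
# investor's age and self-reported risk appetite.
def suggest_allocation(age: int, risk: str) -> dict:
    base_equity = max(0, min(100, 110 - age))          # rule-of-thumb baseline
    adjust = {"low": -15, "medium": 0, "high": 15}[risk]
    equity = max(0, min(100, base_equity + adjust))
    return {"equity_pct": equity, "bond_pct": 100 - equity}
```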

Managing technological migration, legacy systems, and data management: RPA is not only useful for everyday operations; it can also aid in business change. As mentioned above, many financial institutions are wrestling with outdated systems that require herculean efforts, especially with regard to data migration. This can result in tech transformations involving as much manual input as the old systems they are in the process of replacing.

One insurance company in the Caribbean, Guardian Group, successfully implemented RPA for streamlining and speeding up its move to a new suite of tools. RPA was able to “access, calculate, copy, paste, or use embedded business rules to interpret, use, and enter data into the core enterprise application.” Based on how clerical staff navigate between systems, inputting and copying data as they go, RPA can learn the optimal routes between diverse arrays of systems.

Insurance policy creation: For insurers, many new customers require tailor-made policies based on their circumstances. However, the majority of new policies could be considered “boilerplate”, not requiring much human expertise. For these few-sizes-fit-all policies, RPA can establish best practices and provide the right policies to the right customers with minimal human oversight.

Verifying the claims management process for insurers: Further along the insurance lifecycle, claims management often requires repetitive checking by clerical staff, who verify payments and documentation across various systems. This is hard to script by hand because it involves consulting a heterogeneous set of systems, but as with other RPA applications, machine learning can readily establish work processes for this kind of activity.

Regulation compliance and reporting, KYC/AML checks: Since many compliance requirements are highly rule-based, this area is ripe for RPA applications. KYC/AML checks, for example, can differ a lot in their implementation but are not heterogeneous enough to preclude building an automated workflow using RPA tools.
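A rule-based KYC/AML pre-check can be sketched as a handful of screening rules that flag a case for human review. The sanctions list, country codes, and threshold below are invented for illustration; real checks draw on official watchlists and jurisdiction-specific rules.

```python
# Sketch of an automated KYC/AML pre-screen: collect flags, then route any
# flagged case to a human analyst.
SANCTIONED = {"ACME SHELL CO", "BADCORP LTD"}      # placeholder watchlist
HIGH_RISK_COUNTRIES = {"XX", "YY"}                 # placeholder codes


def kyc_precheck(name: str, country: str, cash_volume: float) -> list:
    flags = []
    if name.upper() in SANCTIONED:
        flags.append("sanctions-list match")
    if country in HIGH_RISK_COUNTRIES:
        flags.append("high-risk jurisdiction")
    if cash_volume > 10_000:                       # common reporting threshold
        flags.append("cash volume above reporting threshold")
    return flags
```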

These use cases should give a grasp of the basic functioning of RPA in finance, as well as the potential it holds. The broader context of this change is important for measuring the impact and dynamics of RPA-led business processes. Other breakthroughs, such as chatbots, process mining, and cognitive computing, are playing their role in the adoption of RPA. In turn, this wave of new tools will result in a new workforce of professionals who work with these tools to realize increased productivity and efficiency.

The advent of AI tools like RPA will not just involve a one-for-one replacement of human action with machine action; business processes (and, in some cases, business models) will in turn need to be reconsidered based on the new options available. More of the workforce will be employed in leveraging the capabilities of these tools, and less will work on processing repetitive tasks.

The True Cost of Cybercrime


The incidence and cost of cybercrime is skyrocketing, and businesses are having trouble keeping up

According to research from Accenture, the cost of cybercrime increased by 27 percent between 2016 and 2017, and the average annualized cost of cybercrime to an organization stands at $11.7 million. As the world becomes more connected and more data is stored, these costs are only expected to rise, according to the report. Furthermore, the public now pays attention to data compromises more than ever before, and the reputational hit companies take when their systems are compromised has risen significantly. Large players in the industry are taking note, and companies including Microsoft and HPE are taking steps to help mitigate damage. Kyle Todd, HPE’s Microsoft Category Leader, recently explained some improvements being made.

Cybercrime 101: Just the Facts

The Accenture report also noted that ransomware attacks, perhaps the most lucrative form of cybercrime, doubled between 2016 and 2017. Furthermore, companies spend approximately 3.8 percent of their IT budgets on security, a figure that has dropped from 4 percent in 2014. Perhaps most concerning, 56 percent of executives state that their response to security is reactive rather than proactive. Todd outlines how this approach leaves companies vulnerable, as it typically takes only 24 to 48 hours for cybercriminals to compromise systems, and they can go undetected for an average of 100 days or even longer. These undetected intrusions allow cybercriminals to collect more and more data, and they can lead to compromises of other systems. Furthermore, undetected intrusions let attackers plan their next moves, so ransomware attacks, for example, might be even more expensive to resolve.

Credential Guard

Some of the most powerful tools for protecting data come included in Windows Server 2016. Credential Guard, in particular, should be a central technology for those relying on Windows Server. Modern secure computing relies on digital hashes as an improvement over passwords, and eliminating the need to transmit passwords significantly reduces potential attack vectors. However, cybercriminals can actually use the hashes in place of system passwords, giving them virtual keys to data. Because hashes are so well trusted, compromises often go undetected for extended periods of time. Once attackers gain domain admin privileges through compromised hashes, the entire system is completely compromised. Credential Guard includes a number of integrated safeguards to prevent these attacks, which can be some of the most difficult to detect and recover from. By cutting off these attacks through Credential Guard, companies can fend off attackers using some of the most popular intrusion techniques.
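The reason a stolen hash is as good as a password is that, in challenge-response authentication, the hash itself is the secret used to compute the response. The simplified sketch below (not the real NTLM protocol; the hash and MAC choices are illustrative) makes this concrete.

```python
# Why "pass the hash" works: the server stores only a password hash, and the
# challenge response is computed *from that hash*, so stealing the stored
# hash is equivalent to stealing the password.
import hashlib
import hmac
import os


def password_hash(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()


def auth_response(secret_hash: bytes, challenge: bytes) -> bytes:
    # Response is keyed on the hash, never on the plaintext password.
    return hmac.new(secret_hash, challenge, hashlib.sha256).digest()


stored = password_hash("correct horse battery staple")  # what the server keeps
challenge = os.urandom(16)

# A legitimate client derives the hash from the password it knows...
legit = auth_response(password_hash("correct horse battery staple"), challenge)
# ...but an attacker who stole `stored` never needs the password at all.
passed_the_hash = auth_response(stored, challenge)
```

Credential Guard mitigates this class of attack by isolating the stored secrets in virtualization-based security, so malware on the host can no longer read them.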

Just Enough Administration and Just-in-Time Administration

Historically, admin accounts have often had full run of systems, with only small limitations put in place. Just Enough Administration offers a more sophisticated approach: even if an admin account is compromised, would-be hackers will find themselves with very few privileges, significantly limiting their ability to inflict damage or steal data. Just-in-Time Administration fixes the problem of admin creep. When users are given admin privileges to perform certain tasks, these privileges are rarely taken away. The Just-in-Time approach ensures accounts that don’t need ongoing admin privileges don’t serve as attack vectors. By making it possible to remove access to admin functions, Just-in-Time Administration provides a more fine-grained approach to information access.
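Conceptually, Just Enough Administration replaces blanket admin rights with a per-role allowlist of commands. A minimal model of that idea follows; real JEA is configured through PowerShell session configurations and role capability files, not Python, and the roles and commands below are invented.

```python
# Conceptual model of Just Enough Administration: each role may run only an
# explicit allowlist of commands, and everything else is denied by default.
ROLE_CAPABILITIES = {
    "dns-operator": {"Restart-Service DNS", "Get-DnsServerZone"},
    "backup-operator": {"Start-Backup", "Get-BackupStatus"},
}


def can_run(role: str, command: str) -> bool:
    # Unknown roles get an empty capability set, i.e. no privileges at all.
    return command in ROLE_CAPABILITIES.get(role, set())
```

A compromised "dns-operator" account can restart the DNS service but cannot, say, create users or read backups, which is exactly the damage limitation the article describes.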

Device Guard and Enhanced Auditing Capabilities

We use more devices and device classes than ever before, which creates an array of potential attack vectors. Device Guard is used to create policies that restrict a hacker’s ability to install malware that could make an entire datacenter vulnerable to attack. Enhanced auditing capabilities serve as a powerful complement. Malicious actors can often fly under the radar while compromising a system, as potential signs of intrusion are ignored as noise. Enhanced auditing can seek out these signs, giving companies the ability to react promptly and prevent damage.

HPE Gen10 Server Security: Silicon Root of Trust

Network security is at the forefront of keeping systems safe. However, hackers are moving toward targeting system BIOS and firmware, creating ways to infiltrate systems that won’t be caught by firewalls and other technology. Instead of viewing bits of firmware as independent units, HPE has created a cohesive web of firmware that works in an integrated manner. The Silicon Root of Trust analyzes the fingerprint created by a system’s firmware, and this fingerprint is regularly measured so that, if critical firmware is compromised, the affected area can be isolated and administrators alerted instantly. As the line between hardware and software becomes less clear, focusing on hardware security is becoming ever more important. The Silicon Root of Trust serves as a powerful top-down means of monitoring for intrusions and mitigating potential harm.
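The measurement idea can be modelled simply: hash each firmware component and compare it against a known-good fingerprint, flagging any drift. In the real product the check is anchored in silicon before the CPU boots; this is only a conceptual sketch, and the component names are invented.

```python
# Model of firmware measurement: any component whose hash no longer matches
# its known-good fingerprint is reported as potentially compromised.
import hashlib


def fingerprint(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()


def verify_firmware(images: dict, known_good: dict) -> list:
    """Return names of components whose measurement has drifted."""
    return sorted(name for name, blob in images.items()
                  if fingerprint(blob) != known_good.get(name))


# Baseline taken at provisioning time...
known = {"bios": fingerprint(b"bios-v2.1"), "bmc": fingerprint(b"bmc-v1.4")}
# ...and a later measurement catches a tampered BIOS image.
tampered = verify_firmware(
    {"bios": b"bios-v2.1-MALWARE", "bmc": b"bmc-v1.4"}, known)
```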

HPE Gen10 Server security: HPE Secure Compute Lifecycle

The National Institute of Standards and Technology (NIST) stands at the forefront of developing systems safe from cybersecurity attacks, but the complexity of the technology it develops, and the potential associated costs, mean adoption has been fairly slow. Its guidance covers standards for both software and hardware, and systems that meet these standards can be assured of having state-of-the-art capabilities. HPE is the only system vendor that has invested in the high levels of security specified by NIST at costs that are competitive with systems that don’t comply with these standards. Focusing on the well-funded results of NIST research makes HPE a clear choice for companies looking for the utmost in security. HPE also follows NIST guidelines for ensuring data on used storage devices is scrubbed in such a way that it can’t be recovered, even with the most sophisticated tools available.

For those outside of IT operations, the solution to cybercrime seems simple: Just spend more money. Those who work in datacenters and make decisions, however, realize that budgets can’t keep rising forever, and what’s needed is a smart approach that takes advantage of modern security practices. HPE is focused on delivering the highest levels of security for their customers, but they’re also mindful of the typical budgets companies can afford. Security is a critical investment, and using contemporary approaches to both hardware and software security can prevent the cost and embarrassment of having a system compromised.

Gatwick Airport embraces IoT and Machine Learning


As the eighth busiest airport in Europe and the largest single-runway airport in the world, London Gatwick Airport is an essential fixture for international travellers

In an effort to keep up with the demands of the digital world, Gatwick has recently announced the modernization of its IT infrastructure in partnership with Hewlett Packard Enterprise and Aruba.

Even though typical IT upgrades in airports take four years, Gatwick’s network was upgraded in just 18 months, all while avoiding downtime and instability. Work was completed overnight, with just a two-hour window for upgrades and two hours to roll back to the legacy network. Data links were limited in Gatwick’s old IT infrastructure, but the new network has a cleaner meshed design providing up to 10 times more data connections. As new technologies continue to emerge for consumers, the airport’s management, the airlines, and the businesses in the airport that rely on its infrastructure, Gatwick will provide a robust backbone.

We speak to Gatwick’s CIO, Cathal Corcoran, and Hewlett Packard Enterprise UK&I MD, Marc Waters, below.

Most busy international airports have several runways and ample real estate. Gatwick, on the other hand, must operate with a single runway and limited space. Maximizing efficiency is key to ensuring the airport is able to serve the needs of the UK. IoT-enabled heat sensors will track movement and how busy the airport is, allowing management to better utilize resources and improve the passenger journey through the airport. Tracking data lets the airport handle logistical issues that can’t be solved through expansion, ensuring a smoother and more efficient experience for customers and a better business foundation for the airlines that operate at Gatwick.

World-Class WiFi

Smartphones, laptops, and entertainment devices have made the time-consuming process of air travel more tolerable and more productive, but serving such a large number of travellers in small spaces is a major challenge. Those in Gatwick can expect typical speeds of 30Mbps, providing plenty of bandwidth for working online or streaming video while waiting. Fast and stable WiFi also provides smoother operations for airlines and other companies, enabling them to focus on offering excellent and affordable service without having to worry about outages or sluggish speeds.

Experts are good at finding ways to make the most of limited resources, which is particularly important at Gatwick. When aided by IT, however, they can do even more. Machine learning can detect busy areas in the airport through smartphones, and tracking these results over the long term can provide key insights for optimizing day-to-day operations. When making decisions, Gatwick’s management will be aided by powerful data that can provide insights not attainable with more traditional technologies, and the new IT infrastructure will be key to this analysis. Facial recognition technology will boost security as well as track late passengers, and personalized services based on smartphones or wearable technology can provide valuable updates to travellers on a personal level.
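The crowd-sensing idea reduces to counting distinct devices seen in each zone over a time window and flagging zones above a threshold. The zone names and threshold below are invented for illustration; a production system would also anonymize device identifiers.

```python
# Toy busy-area detector: count unique devices observed per zone and flag
# zones whose footfall exceeds a threshold.
from collections import defaultdict


def busy_zones(sightings, threshold=3):
    """sightings: iterable of (zone, device_id) observations."""
    devices = defaultdict(set)
    for zone, device in sightings:
        devices[zone].add(device)   # sets de-duplicate repeat sightings
    return sorted(z for z, d in devices.items() if len(d) >= threshold)
```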


Dealing with lost baggage can be a time-consuming and often stressful process. Armed with its new IT infrastructure, Gatwick and its airline operators are poised to offer a better alternative. Being able to track luggage and its owners creates new opportunities for simplifying the check-in and baggage claim process, helping get travellers in and out of the airport in a prompt and seamless manner.

Around 45 million people travel through Gatwick each year, and the airport’s unique constraints make operation an ongoing challenge. However, new technology offers tremendous promise that will serve Gatwick well for passengers today, and robust infrastructure provides a solid foundation for testing and implementing new technology for years to come.

Just announced! Latest addition to HPE SimpliVity provides support for Microsoft Hyper-V


HPE recently announced the latest addition to the HPE SimpliVity hyperconverged portfolio — HPE SimpliVity 380 with Microsoft Hyper-V.

This new addition allows HPE to offer a wider range of options for customers’ multi-hypervisor needs, delivering a more holistic hyperconverged offering with added management, optimization, and intelligence options, as well as greater flexibility.

HPE SimpliVity 380 with Microsoft Hyper-V provides businesses with an easier IT infrastructure solution, simplifying the data center by converging servers, storage, and storage networking into one simple to manage, software-defined platform. The result is the increased business agility and economics of the cloud in an on-premises solution. The pre-integrated, all-flash, hyperconverged building block combines all infrastructure and advanced data services for virtualized workloads—including VM-centric management and mobility, data protection, and guaranteed data efficiency.

Below HPE’s Thomas Goepel looks at the architecture that’s bringing Hyper V to HPE SimpliVity.

HPE SimpliVity 380 now enables customers to deploy the industry’s most powerful hyperconverged platform with either Microsoft Hyper-V or VMware vSphere private clouds. The latest update was designed to give customers more multi-hypervisor choices together with the benefits of HPE SimpliVity:

VM-centric management and mobility: HPE SimpliVity hyperconvergence enables policy-based, VM-centric management abstracted from the underlying hardware to simplify day-to-day operations and enable the secure digital workspace. It also provides seamless application and data mobility, empowering end users and driving increased productivity while increasing efficiency and reducing costs.

Data protection: The HPE SimpliVity hyperconverged solution delivers built-in backup and recovery at no additional cost. These data protection features include the resilience, built-in backup, and bandwidth-efficient replication needed to ensure the highest levels of data integrity and availability, eliminating the need for legacy data protection products.

Data efficiency: HPE SimpliVity RapidDR leverages the inherent data efficiencies of HPE SimpliVity hyperconverged infrastructure, slashing recovery point objectives (RPOs) and recovery time objectives (RTOs) from days or hours to seconds, with a guaranteed 60-second restore for a 1TB VM.

Below, HPE’s Stuart Gilks explains why the move expands hypervisor choice for customer use-cases.

Citrix Ready HCI Workspace Appliance Program

In addition, HPE has extended its partnership with Citrix by integrating the HPE SimpliVity portfolio, including the new HPE SimpliVity 380 with Microsoft Hyper-V, into the Citrix Ready HCI Workspace Appliance Program. The program allows HPE and Citrix to set the standard for customers to easily deploy digital workspaces with multi-hypervisor, multi-cloud flexibility, resulting in world-class digital collaboration and a borderless, productive workplace.

About Chris Purcell

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. For more information about HPE SimpliVity 380 with Microsoft Hyper-V, click here. To learn more about how hyperconvergence can simplify your datacenter, download the free e-book, Hyperconvergence for Dummies.

To read more articles from Chris Purcell, check out the HPE Converged Data Center Infrastructure blog.


Blockchain: A Chaotic Ecosystem?


After a couple of years of speculation, Blockchain and cryptotechnology finally received substantial mainstream attention in 2017

Governments got on board, ICOs progressed at a breakneck speed, and the market cap of the entire industry skyrocketed accordingly. The keyword now driving many discussions in the domain is “mainstream adoption”, with investors, blockchain entrepreneurs, and mainstream tech companies trying to predict where the market is going to go.

It is an important time for the industry, since first-mover advantage is likely to set the scene for blockchain platform selection over the next 10 years. Just as Microsoft staked its claim to a massive user base with Windows, and Google amassed a fortune off the popularity of its search engine, the blockchain solutions that get a first foot in the door of the new tech ecosystem will be set up for future dominance.

However, it is anyone’s guess how this will shake out. The industry now is quite frankly a mess, with tech incumbents slow to get up to speed while the blockchain-native projects are beset by controversy and uncertainty as well as a seemingly never-ending proliferation of new competitors holding ICOs every week. Here we will briefly outline the major platforms and protocols in the oncoming scrap to be the king of the blockchain jungle.

The crypto-native heavyweights: Ethereum, Ripple, Stellar, Hyperledger, (R3) Corda and others
Given the seemingly endless applications of blockchain technology, there is a broad and blurry bracket of crypto protocols offering similar solutions. These range from smart contract development in the case of Ethereum, to blockchain architecture in the case of Hyperledger, to Ripple as a payment protocol. Here we’ll sketch out what each does.


Ethereum has in many ways usurped Bitcoin as the flagship of the crypto industry. Ethereum uses protocols and a blockchain to enable users to create complex transactions and smart contracts, using a virtual machine to automate, in a transparent and decentralised way, much of what runs on proprietary servers now. With the launch of the Enterprise Ethereum Alliance, the world’s largest open source blockchain initiative, Ethereum is pulling ahead of the pack in the enterprise-friendly stakes.


One of the most notably successful platforms of late, Ripple have developed a cryptographic protocol to enable superfast settling of cross-border financial transactions. It is in many ways a halfway house between crypto and fiat, using cryptographic protocols to enable fast international fiat money transfers. On the 13th of April, Banco Santander released an app for money transfer that uses the Ripple protocol, and the platform allows third-party developers to use its open source framework and blockchain to integrate fast payments into their solutions.

The Ripple platform also includes a token (XRP), which doesn’t actually have to be used in Ripple protocol based apps.


The Stellar protocol grew out of Ripple and is run by the Stellar Foundation. It primarily targets international value transfers, as Ripple does, except it is more focussed on transfers using its own token and on allowing third parties to process payments quickly and cheaply. Early users being targeted are non-profits, third-world organisations, and ecommerce. One interesting thing about the platform is IBM’s involvement, and it seems IBM will lean on Stellar for a lot of its blockchain solutions in the future.


The Hyperledger project created by the Linux Foundation is focussed more on the building blocks (so to speak) of ledger technology. Companies intending to create their own crypto solutions or private blockchains can utilise the Hyperledger protocol and open source stack as tools. Intel are a notable partner, having developed the Hyperledger Sawtooth protocol to help speed up blockchain transactions.


Created by R3 (a consortium of major financial institutions), Corda aims to provide a range of business-ready blockchain solutions mainly focussed on financial transactions but also applicable to supply-chain transparency and securities.

Tech companies getting in on the action

Notably absent are Google and Facebook, neither of which has had much involvement in blockchain. While some skeptics consider this a negative sign for blockchain technology, others would say that the transparent nature of blockchain is not in the best interests of these companies. Regardless, some tech incumbents are wading into the domain.

Meanwhile, Microsoft are ramping up for the release of their Coco Framework on the Azure platform, which has garnered a lot of positive press for leveraging existing blockchain-focused consensus mechanisms and ticking all the control boxes enterprises require. Just after it was announced last year, Azure CTO Mark Russinovich spoke to TechNative about the move.

While they haven’t had much involvement otherwise, Amazon are working to make blockchain solutions like those mentioned above (especially Corda and Sawtooth) work seamlessly with AWS – unsurprisingly, a move positioned directly against Microsoft’s Coco.

The bottom line: Anyone’s game

While individual businesses have a broad range of options open to them for developing blockchain and ledger solutions, we are still in the early stages of this domain, where there are few head-to-head offerings or clearly defined markets. It will be interesting to see whether established companies gain more traction with ledger technology or whether the crypto-native platforms hold their market share in this nascent industry.

About the Author

Eoghan Gannon is a senior writer at TechNative, a cryptocurrency researcher, and an entrepreneur. His interests lie in how blockchain technology is changing business.

8 AI Use Cases Every CxO Should Know About


Artificial intelligence is changing from promising concept to a core technology businesses will need to use going forward

While reading about tools and frameworks can give C-level decision-makers ideas for incorporating the technology, it’s also worth exploring how others are making use of AI; after all, it’s important to close gaps created by your AI-adopting competitors. Here are a few AI use cases executives need to explore.

Fraud and Theft Detection

Dealing with fraud is all but inevitable when working in the financial sector, and retail stores assume some amount of theft as part of doing business. Classic monitoring is still essential for detecting fraud and theft, but AI systems can go a step further and detect problems while they’re underway. Modern AI systems are trainable, and when given one or more sets of data, they can detect patterns that are hard to spot using more traditional techniques. By feeding these systems banks of legitimate and banks of fraudulent data, they can be trained to detect fraud as it occurs. Likewise, AI in retail can detect popular locations for theft and, in some cases, even determine the times when theft is most likely to occur. This data gives store owners and managers useful information for improved store monitoring.
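The train-on-labelled-data idea can be sketched in a few lines. Below is a toy nearest-centroid classifier in Python, standing in for the far more sophisticated models used in production; all transaction data, features, and class labels are invented for illustration.

```python
# A toy nearest-centroid classifier: "train" on labelled transactions,
# then flag new ones by whichever class centroid they sit closer to.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def distance_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Features: (amount in $, hour of day). Legitimate vs fraudulent examples.
legitimate = [(25.0, 12), (40.0, 18), (12.5, 9), (60.0, 20)]
fraudulent = [(900.0, 3), (1200.0, 2), (750.0, 4)]

centroids = {"legitimate": centroid(legitimate), "fraud": centroid(fraudulent)}

def classify(tx):
    return min(centroids, key=lambda label: distance_sq(tx, centroids[label]))

print(classify((1000.0, 3)))   # a large 3 a.m. charge looks fraudulent
print(classify((30.0, 14)))    # a small afternoon purchase looks legitimate
```

Production fraud models use hundreds of features and learned decision boundaries, but the workflow is the same: label historical data, fit a model, score new transactions as they arrive.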

The AI-Enabled Office

Today’s offices are filled with technology. However, it’s easy for employees to become overwhelmed with technology, and the urge to multitask can lead to poorer performance. AI-based voice assistants and robotic processes can let employees disconnect from monitors for a moment to complete tasks. AI can also serve a purpose in intra-office communication; AI can alert users when changes are made to collaborative works, and it can help route information to the employees who need it most. AI is already being used for time management systems, and it can help employees schedule their days more effectively. The smart office is coming along, piece by piece. For executives and others focused on employee productivity and satisfaction, it’s important to take a more holistic view of AI and find out how it can be integrated to create a truly modern workplace.

We recently spoke to Mihir Shukla, CEO and Co-Founder of one of the market leaders in robotic process automation, Automation Anywhere.

Customer Service

Telephone helplines of the past have a well-earned reputation for being hard to use, and their infamy has poisoned the well when it comes to automated assistance. However, AI is increasingly playing a role in customer service, and recent reviews of modern technology are far more positive. By reading through past interactions with customers, modern systems are better able to sort through human language and determine what the customer is asking for. In addition, these systems can improve over time, as each new interaction provides more data to improve performance. While some customers will always prefer to speak with a human, many people claim to prefer interacting with a chatbot or other automated interface. Although these bots will lack the so-called human touch for some time, they can access large databases that are easy to update instantly, which can lead to better overall assistance.
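To illustrate the intent-detection idea in miniature, here is a toy keyword-scoring classifier in Python. Real systems learn these associations from past customer interactions; the intents and keyword lists below are hand-picked assumptions.

```python
# A toy keyword-scoring intent detector: each intent is scored by how many
# of its indicative words appear in the customer's message.

INTENTS = {
    "billing":  {"bill", "charge", "invoice", "refund", "payment"},
    "shipping": {"ship", "deliver", "delivery", "tracking", "package"},
    "support":  {"broken", "error", "crash", "help", "fix"},
}

def detect_intent(message):
    words = set(message.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("I was charged twice, please refund my payment"))
```

A real chatbot replaces the hand-picked keyword sets with a model trained on logged conversations, which is exactly why each new interaction improves performance.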


Recruiting and Hiring

Jobs throughout much of the developed world are becoming more intellectually demanding, and they’re becoming more specialized. Even though online recruiting and online resumes have helped companies connect with potential employees, the task of hiring the right person has become more of a challenge. AI tools are better able to sort through potential candidates and provide valuable heuristics to recruiters. Benefits don’t end after an employee is hired: AI systems are valuable tools for onboarding new hires, and they can even replace a number of human resources tasks. Companies don’t have to develop all of this technology internally, as a number of companies now provide AI-based recruiting services that can perform nearly everything up to an in-person interview, letting HR employees spend their time on other tasks.

Logistics Management

The value of computer technology is clear when it comes to logistics, and this is especially true when it comes to dealing with warehouses and shipping. As more and more customers expect free shipping, reducing logistics costs will be essential. Computerized inventory management and tracking is effectively mandatory in many fields, but AI is able to use this data to provide useful information and let companies find means of cutting costs. These cost savings tend to be incremental, but they add up over time. One area in which AI excels is finding counter-intuitive information. Shipping routes that no human or traditional computer system would even consider might have some unexpected benefits that machine learning can uncover. Furthermore, there may be subtle annual trends that are easy to miss in the data, and AI systems might be able to uncover unforeseen yet reliable trends.
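As a minimal sketch of the seasonal-trend idea, the Python snippet below averages shipping volumes by calendar month across several years to surface a recurring peak; all figures are invented.

```python
# Surface an annual trend: average shipping volumes by calendar month
# across several years of (invented) records.

from collections import defaultdict

# (year, month, shipments) records across three years
records = [
    (2016, 1, 100), (2016, 6, 140), (2016, 11, 210), (2016, 12, 230),
    (2017, 1, 110), (2017, 6, 150), (2017, 11, 220), (2017, 12, 240),
    (2018, 1, 105), (2018, 6, 145), (2018, 11, 215), (2018, 12, 235),
]

totals = defaultdict(list)
for year, month, shipments in records:
    totals[month].append(shipments)

monthly_avg = {month: sum(vals) / len(vals) for month, vals in totals.items()}
peak = max(monthly_avg, key=monthly_avg.get)
print(f"Peak month: {peak} (avg {monthly_avg[peak]:.0f} shipments)")
```

Machine learning extends this simple averaging to many interacting variables at once, which is where the counter-intuitive findings mentioned above tend to come from.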


Cybersecurity

The internet is invaluable for businesses of all sizes. With the power of internet connectivity, however, come a number of risks, as hacks can be expensive and lead to lost customers and a damaged reputation. The fundamentals of cybersecurity haven’t changed with the dawn of AI, but AI-based tools are better able to detect threats as they occur and take preemptive steps to halt would-be attackers. AI tools are also able to audit systems and find bugs that humans might miss. It’s worth noting that hackers are often among the first to adopt new technology, and it stands to reason that AI will lead to more sophisticated attacks in the coming years. It’s important to first focus on basic security, but executives should explore whether AI-enabled cybersecurity is worth the investment.
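As a toy illustration of AI-style threat detection, the Python sketch below flags hosts whose failed-login counts are statistical outliers. Production systems use far richer models; the hostnames, counts, and two-standard-deviation threshold here are assumptions.

```python
# Flag hosts whose failed login counts sit more than two standard
# deviations above the fleet mean. The counts are invented.

from statistics import mean, stdev

failed_logins = {
    "host-a": 3, "host-b": 5, "host-c": 4, "host-d": 2,
    "host-e": 6, "host-f": 4, "host-g": 97,   # compromised?
}

values = list(failed_logins.values())
mu, sigma = mean(values), stdev(values)

suspicious = [host for host, count in failed_logins.items()
              if (count - mu) / sigma > 2]
print(suspicious)
```

The same outlier-scoring pattern, applied to network flows or process behaviour instead of login counts, is the core of many AI-based intrusion detection products.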

Callsign’s Intelligence Driven Authentication (IDA) protects identities and defends against data breaches. We recently spoke to their commercial VP Daniel Grimes.

Energy Management

Computing has long assisted energy management, from power stations down the lines to customers’ homes and businesses. AI provides even better capabilities for detecting power usage patterns and ensuring data is routed properly. Multiple plants can share data to optimize their operations, as can regulators. Customers can benefit as well: AI-derived data can inform customers about their energy usage and help them make more informed decisions about their consumption. For C-level decision-makers, energy management serves as a shining example of how cooperation, even among competitors, can make better use of resources. Two companies depending on shared resources can often save on expenses by combining their efforts to provide faster and more precise resource allocation. Similarly, providing customers with the type of transparent information AI is good at delivering can serve as a win-win for both parties.

Leveraging IoT Technology

The software dealing with IoT devices is complex, and many of the most popular tools rely on AI and machine learning to provide base functionality. However, AI can do so much more when fed data from IoT devices. Some IoT devices will inevitably fail, and AI algorithms can detect when a device is likely to break in the near future, allowing for preventative maintenance. Furthermore, AI can detect use modes that lead to bugs and other problems, enabling companies to request software fixes. Perhaps the most powerful technique, however, is to let AI roam free across IoT data and look for correlations. While correlated data doesn’t necessarily show causation, it’s often the case that unexplored metrics have more of an impact than initially imagined. Machine learning, in particular, can be great for finding unexpected knowledge buried within volumes of IoT-generated data, and it’s worth exploration.
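The predictive-maintenance idea can be sketched simply: fit a trend to each device’s recent sensor readings and flag devices drifting toward failure. The Python below uses a least-squares slope on invented temperature readings; the one-degree-per-reading threshold is an arbitrary assumption.

```python
# Fit a least-squares slope to each device's recent temperature readings
# and flag devices that are trending upward past a threshold.

def slope(ys):
    """Least-squares slope of ys against x = 0..n-1."""
    n = len(ys)
    x_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

readings = {
    "pump-1": [61, 60, 62, 61, 60, 62],    # stable
    "pump-2": [60, 63, 67, 72, 78, 85],    # heating up fast
}

at_risk = [dev for dev, temps in readings.items() if slope(temps) > 1.0]
print(at_risk)   # schedule preventative maintenance for these devices
```

Real deployments replace the single slope with models over vibration, current draw, and error rates, but the payoff is the same: service the device before it fails, not after.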

AI today is far less human-like than experts predicted in the mid-20th century. But while we haven’t quite reached the singularity yet, hardware has become many orders of magnitude faster and cheaper, and the economic benefits of AI are making it a red-hot field for research. While we’re just at the beginning of what many believe is an AI revolution, companies of all sizes need to take note of how the technology is advancing and how others are leveraging it. For C-level decision-makers who prefer to defer decisions to others, it’s time to personally take a closer look at what AI can do.

IoT’s Growing Security Challenge


The Internet of Things holds tremendous promise. However, having so many connected devices presents significant security issues, and companies that fail to properly secure their devices place themselves at tremendous risk

In their latest report, entitled “The Internet of Things (IoT): A New Era of Third-Party Risk,” the Ponemon Institute found that companies are acutely aware of potential risks of IoT adoption. In total, 97 percent of respondents believed a data breach or cyberattack caused by an unsecured IoT device could be catastrophic. This comes in an era where IoT technology is growing rapidly and attacks are becoming more common. In fiscal year 2017, 15 percent of companies reported experiencing a data breach due to unsecured IoT devices; this number jumped to 21 percent in 2018. Similarly, cyberattacks caused by IoT devices rose five percentage points, from 16 to 21 percent.

Although awareness of the potential dangers posed by IoT devices in general, and third-party devices specifically, has risen significantly, security practices have been slow to increase. Only 29 percent of survey respondents claim to actively monitor IoT devices used by third parties. This stands in contrast to further fears respondents expressed, as 81 percent of respondents believe their company is likely to be a victim of a data breach caused by improperly secured IoT devices, and 82 percent believe IoT devices will lead to cyberattacks, including denial-of-service attacks.

© 2018 Ponemon Institute and The Santa Fe Group

In all, the report expressed a mixed state of affairs regarding third-party IoT risk and IoT security management in general. Although approximately two-thirds of those surveyed believe taking a strong tone on security from the top is important, 58 percent also stated that it is currently impossible to determine whether IoT and third-party safeguards are appropriate. Furthermore, 53 percent of those surveyed rely on contracts, while 46 percent already have policies allowing them to disable devices that might pose a security threat. However, fewer than half can monitor compliance in these scenarios, which can increase the risk of data breaches and cyberattacks.

Perhaps most surprising was how frequently companies fail to inventory their IoT devices. Among the 56 percent of those surveyed who don’t keep an inventory of their IoT devices, 88 percent blamed the lack of centralized tools for managing IoT devices. In all, less than 20 percent of respondents are able to identify a majority of their IoT devices.

Findings in the report indicate that companies can improve their security by treating third-party IoT devices similarly to their internal devices. Although 50 percent of respondents report monitoring their internal IoT devices, only 29 percent monitor third-party devices. There were also signs of progress: The number of respondents who require third parties to identify IoT devices connected to their network increased from 41 percent to 46 percent.

So how do businesses offset the risk and ensure their adoption of IoT devices is secure? Properly defending the IoT requires using multiple technologies. Here are some of the most popular.

Traditional Networking Security

In some cases, tried and true solutions are still the best. IoT devices often connect through the public internet, and standard network security should be used wherever the IoT and public internet meet. Firewalls can prevent a broad class of attacks, and antivirus and antimalware products can determine if devices have been affected. One of the benefits of combining the IoT with the public internet is the maturity of security suites. Cisco, for example, has decades of experience crafting appropriate tools that are great options for securing IoT devices. Intrusion detection technology is especially useful, as it has the potential to detect attacks before they can cause harm.

Symantec EMEA CTO & VP Darren Thomson sees traditional security methods as “absolutely valid” in the age of IoT. Below he explains why security should be front of mind when making IoT buying decisions.


Encryption

Again, tested technology is often the most effective, and encrypting data can prevent many popular attacks. All data should be encrypted as it’s sent over any network, even proprietary and internal networks. However, it’s also worth encrypting data as it’s being used on the device itself, as preventing physical access to extensive IoT networks can be difficult. Not all data sent over IoT infrastructure is critical; sensor data relating to the temperature of hardware, for example, doesn’t present a risk. However, any data leaked to hackers can present further risks, so leave no data unprotected. Major vendors, including HPE, have extensive experience with encryption, and Symantec also offers a number of encryption tools.
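In practice, encryption in transit usually means TLS. The Python sketch below configures a client-side context with the standard-library ssl module that insists on TLS 1.2 or newer and verifies server certificates; the broker hostname in the commented usage is hypothetical.

```python
# Enforce encryption in transit with Python's standard-library ssl module:
# a client context requiring TLS 1.2+ with full certificate verification.

import ssl

context = ssl.create_default_context()           # verifies certs by default
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True                     # reject mismatched certs
context.verify_mode = ssl.CERT_REQUIRED

# A device would then wrap its socket before talking to the backend:
# with socket.create_connection(("iot-broker.example.com", 8883)) as sock:
#     with context.wrap_socket(sock, server_hostname="iot-broker.example.com") as tls:
#         tls.sendall(b"sensor-reading")
```

Pinning a minimum TLS version and requiring certificate verification closes off downgrade attacks and man-in-the-middle interception, two of the most common ways IoT traffic is compromised.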

Public Key Infrastructure Security

When it comes to control, PKI security is a step above other available options. Although it will take some time to set up, baking PKI security into an IoT installation will provide a degree of assurance unmatched by more ad hoc solutions to security. Furthermore, PKI technology is mature, and popular implementations, including X.509, have proven successful in critical environments. Fortunately, there are ample options for PKI security on IoT networks, and vendors are already familiar with the technology. DigiCert and Entrust Datacard are well known in the field, and large vendors including HPE and Symantec can provide proven solutions as well.



Patching and Updates

We sometimes fall into the trap of thinking IoT devices are simpler than they really are. Nearly all IoT devices are full-fledged computers capable of general purpose tasks, and their processors are often surprisingly powerful. The operating systems on IoT devices are substantial in size, and they’re likely to have bugs on occasion. Furthermore, the software they run can also potentially be hijacked. Because IoT networks are so large, releasing patches for all devices can be especially difficult, but it’s essential for preventing your infrastructure from becoming a zombie network. Make patching capabilities a key element of your network from day one to avoid potentially expensive problems down the line.

System Monitoring

The scalability of IoT technology accounts for much of its appeal, but it’s easy to lose track of just how large the network is and how many devices it contains. Because of this, visualization software has become a critical part of managing IoT networks. When picking tools for monitoring performance and visualizing the network, it’s worth choosing tools that provide security feedback as well. Solutions from Kaspersky Lab, SAP, and others can use machine learning and other technology to detect potential threats and send alerts, letting you respond in a timely manner. IoT protection requires vigilance, and 24/7 monitoring is essential.

Focus on the APIs

Although lower-level security is key to designing a safe network, higher-level security is important as well. The most popular area to focus on is the APIs used to protect data while it’s in transit. Again, encryption plays an important role, but API-level security provides another means of ensuring data isn’t rerouted or compromised while in use. Furthermore, APIs provide a means for developers to make their software secure from the beginning, so programming mistakes, which are inevitable, are far less likely to lead to intrusions. There are ample options to choose from when picking an API, including options from Google, CA Technologies, and Akana.
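One common form of API-level protection is request signing. The Python sketch below uses the standard-library hmac module to sign a payload with a shared secret and verify it server-side; the secret and payload are invented.

```python
# API-level request signing: the client computes an HMAC over the request
# body with a shared secret, and the server rejects anything whose
# signature doesn't verify.

import hmac
import hashlib

SECRET = b"shared-device-secret"   # provisioned per device in practice

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(sign(payload), signature)

body = b'{"device": "sensor-42", "temp": 21.5}'
sig = sign(body)

print(verify(body, sig))               # True: untouched request
print(verify(b'{"temp": 99.9}', sig))  # False: tampered payload
```

Commercial API gateways layer tokens, quotas, and schema validation on top, but an integrity check like this is the foundation that stops payloads from being altered in flight.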

The Internet of Things is transforming companies of all sizes, and this transformation shows no signs of slowing. With the opportunities provided by the IoT, however, comes a set of new obligations, as the many potential attack vectors on IoT infrastructure will prove tempting for malicious actors. Fortunately, there are plenty of options available for ensuring your IoT infrastructure is secure, and investing in security early on can allow you to reap the benefits of the IoT while keeping your critical data safe.

How Will Quantum Computing Impact Cyber Security?


Quantum computing is not an incremental improvement on existing computers

It’s an entirely new way of performing calculations, one that can solve in a single step problems that would take traditional computers years or even longer. While this power is a boon in a number of fields, it also makes certain types of computer security techniques trivial to break. Here are a few of the ways quantum computing will affect cybersecurity and other fields.

Today’s Security

Cryptography powers many of today’s security systems. Although computers are great at solving mathematical problems, factoring especially large numbers can be effectively impossible for even the most powerful computers, with modern algorithms requiring decades or even longer to crack. The nature of quantum computing, however, means that cryptography based on factoring numbers will be effectively useless.
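A toy example makes the dependence on factoring concrete. The Python sketch below runs textbook RSA with deliberately tiny primes and then “breaks” it by brute-force factoring; real keys use primes hundreds of digits long, which classical computers can’t factor in any feasible time but a large quantum computer running Shor’s algorithm could.

```python
# Toy RSA with tiny primes, showing why security rests on factoring:
# anyone who can factor n can reconstruct the private key.

p, q = 61, 53
n = p * q                      # 3233 -- public
e = 17                         # public exponent
phi = (p - 1) * (q - 1)        # 3120 -- secret, needs p and q
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message   # legitimate decryption works

# An attacker who factors n recovers everything:
attacker_p = next(f for f in range(2, n) if n % f == 0)
attacker_q = n // attacker_p
attacker_d = pow(e, -1, (attacker_p - 1) * (attacker_q - 1))
print(pow(ciphertext, attacker_d, n))     # 65 -- message recovered
```

With a 3233-bit rather than a 4-digit modulus, the brute-force loop above would outlast the universe on classical hardware; that gap is exactly what quantum factoring threatens to close.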

Fortunately, many cryptography approaches in use today are designed to be safe from quantum computers that haven’t yet been built. Businesses, government agencies, and other entities that place a high priority on security don’t necessarily need to switch to quantum-safe approaches just yet, but it’s important that organizations are able to make the transition promptly should quantum computing technology develop faster than anticipated. It’s also worth noting that other forms of security won’t be affected by quantum computing. Two-factor authentication, for example, will be just as effective.

Tomorrow’s Security

The basics of quantum computing sound almost unbelievable, but they’re based on well-established science and mathematics. Modern computers rely on discrete values; a bit is either a 0 or a 1. Quantum computers, on the other hand, are able to store both of these possibilities simultaneously in what are called qubits, and the value only resolves to a definite 0 or 1 when it is measured.
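The superposition idea can be made concrete with a few lines of Python: model a qubit as a pair of amplitudes, apply a Hadamard gate to put a definite |0> state into equal superposition, and read off the measurement probabilities as the squared magnitudes. This is a classical toy simulation, not how real quantum hardware works.

```python
# A toy simulation of a single qubit as a two-amplitude state vector.
# The Hadamard gate puts a definite |0> into an equal superposition.

import math

def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)                 # qubit definitely |0>
superposed = hadamard(zero)       # amplitudes (1/sqrt(2), 1/sqrt(2))

p0, p1 = probabilities(superposed)
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")   # 0.50 each until measured
```

Simulating n qubits this way takes 2^n amplitudes, which is precisely why classical machines can’t keep up and real quantum hardware is needed.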

Combined with the equally baffling concept of quantum entanglement, which allows qubits to be bound together no matter how far apart they are, quantum technology can open the door to cryptography techniques that are theoretically unbreakable. No matter how much computing power is dedicated to attacking quantum-based security implementations, they’ll still provide a safe conduit for data. With certain implementations, keys used to encrypt data will instantly stop working if anyone attempts to intercept them, leading to inherent security.

Quantum Arms Race

The ability to defeat common security implementations makes quantum computers a goal for intelligence agencies. Anticipating their eventual invention, many intelligence agencies are believed to be storing intercepted traffic that can’t yet be cracked but that may be decrypted in the future. The first agencies to gain access to quantum computing power will have a substantial edge on their counterparts in other nations, and news of quantum computing success will spur further investment elsewhere.

Unlike the development of weapons, however, there are also commercial and academic interests in quantum computers, so developing an arms treaty seems unlikely. Furthermore, non-government entities can likely gain access to quantum computers as well, presenting even more risk for compromising data. These threats underline the importance of ensuring new security measures are able to handle existing computers as well as potential quantum computers.

Who Will be the First?

The first intelligence agency with access to a quantum computer will gain a significant edge, and the first company with quantum computers for sale will stand to gain tremendously. Some of the names are long-time staples, including IBM, which has made slow but steady progress toward quantum computing over the years and expects major advances during the next decade. Another big name is Microsoft. We recently spoke to their senior technologist Rob Fraser about the transformative impact of quantum computing.

While other companies are attempting to build quantum computers, IBM seems most likely to be the first to succeed.

However, it’s important to appreciate the role academia is playing. The concept of quantum computing is built on quantum mechanical theory, a field where typical hardware engineers have no experience. Contributions to the field have come from academic institutes with a strong history in technology, including MIT and Harvard. The perplexing nature of quantum mechanics, which is difficult to comprehend even for the world’s leading researchers, means that development will always be largely based on theory and not just engineering.

What to Expect

Although quantum computers can perform some tasks that are impossible or impractical on standard computers, they may never replace the typical computer architecture we’re used to. Quantum computers in development now are incredibly sensitive, and there doesn’t yet seem to be an engineering solution to this sensitivity. Furthermore, the calculations quantum computers excel at aren’t especially useful for standard computing tasks.

However, quantum computing will lead to scientific advances that can benefit society at large, and it may play a role in internet infrastructure, potentially improving performance. Although there may not be a quantum computer in every home, the impact of quantum computing will be substantial, if, much like quantum mechanics itself, unpredictable.

Toshiba Combines Edge Computing With Wearable Technology

Toshiba recently announced the release of new technology designed to combine the power of edge computing with assisted reality

Edge devices are typically viewed as self-contained computing devices that offer little user interaction, but Toshiba’s portable dynaEdge DE-100 device pairs with the AR100 Viewer smart glasses to combine the power of edge computing with wearable technology.

Use Cases

Toshiba’s new technology offers excellent portability. The smart glasses are easy to wear, and the computing device is lightweight and simple to slip into a pocket. Although it’s being marketed as a general purpose device, Toshiba is placing an early emphasis on manufacturing and maintenance, fields that require workers to move around large areas. Logistics is a focus as well, as the camera, especially when combined with GPS, Wi-Fi, and Bluetooth capabilities, allows for better tracking and management compared to more traditional solutions. The battery lasts up to 6.5 hours, enough for most of the working day, and can be swapped out if needed. Training is likely to be a popular use case as well, as the glasses can deliver information while keeping the user’s hands free.

We spoke to David Sims from Toshiba to find out more about the device.

Powerful Capabilities

There’s plenty of power in Toshiba’s new device. With a 6th generation Intel Core vPro processor, up to sixteen gigabytes of RAM, and 512 gigabytes of SSD storage, this portable device can perform complex tasks that go beyond the capabilities of many edge computing devices. Furthermore, it runs Windows 10, a first for an edge device of its type, providing excellent software compatibility. Skype for Business is supported, making the DE-100 a great device for communicating. Other Windows applications can be used as well, which might let some workers who were dependent on laptops use their smart glasses instead.

Also supported is the Toshiba Vision DE Suite, which allows frontline workers to share files in addition to live video and images, making collaboration easier even across a large area. Toshiba’s partnership with Ubimax, a market leader in enterprise software for smart glasses, makes this innovative product compelling for ever deeper integration into a company’s workflows.
