Smartwatches – Spying on Kids

Germany's telecoms regulator, the Federal Network Agency, has banned the sale of children's smartwatches and asked parents to destroy any that they already have.

Danger To Children – Spying and Tracking

The regulator has taken this step over concerns that children wearing the watches could, in theory, be spied upon and tracked. These risks arise because the watches are internet-connected and are thought to be poorly secured, e.g. with no encryption of any transmitted data. This could mean that the watches could be hacked and taken over, and that the GPS tracking in them could be used by unauthorised persons to track the child.


Smartwatches like the ones that have been banned in Germany are generally aimed at children aged between five and twelve, a demographic that could be considered particularly vulnerable if data from the watches fell into the wrong hands.


These smartwatches have a SIM card and a limited telephony function, and are linked to an app. Parents can use the app to access their child’s smartwatch and listen to what is happening in the child’s environment, and it has been reported that the German Federal Network Agency has evidence that parents have used this feature to listen to teachers in the classroom. This ‘unauthorised transmitting’, and the privacy concerns surrounding it, have led to schools being warned to be on the lookout for the watches.

Similar Case In Norway

This is not the first time that concerns have been raised about the security and privacy of smartwatches. Back in October, the Norwegian Consumer Council (NCC) reported that some children’s watches had flaws such as transmitting and storing data without encryption. Among the dangers identified were that the watches could be hacked using basic techniques, and that the (child) wearer could be tracked or made to appear to be in a different location.

Internet-Connected Gifts / Toys Fear

Only last week there were news reports that consumer watchdog Which? had identified toys such as the Furby Connect, the i-Que robot, Cloudpets and Toy-fi Teddy as having a security vulnerability: because no authentication is required, anyone within range could connect to them via Bluetooth.

Also in the US, back in July this year, the FBI issued an urgent announcement describing the vulnerability of internet-connected toys to such risks, explaining steps to take to minimise the threat. The main concern appeared to be that young children could tell their toys private information, thinking they’re speaking in confidence. This information could be intercepted via the toy, thereby putting the child and family at risk.

What Does This Mean For Your Business?

Many tech and security commentators agree that a lot more care needs to be taken by manufacturers of Internet-connected / smart toys, gifts, and other home and business products to make sure that they are secure when they are sold, and that any information they do transmit is encrypted.

It is very worrying that children, particularly, may now be at risk due to vulnerabilities in smart toys. There have been many occasions in recent years when concerns about security and privacy vulnerabilities in IoT / smart products have been publicly expressed and reported. The truth is that the extent of the current vulnerabilities is unknown, because the devices are so widely distributed globally and many organisations tend not to include them in risk assessments for devices, code, data, and infrastructure. Home / domestic users have no real way of ascertaining the risks that smart / IoT devices pose, probably until it’s too late.

It has also been noted by many commentators that not only is it difficult for businesses, including manufacturers of smart products, to ascertain whether all their hardware, software, and service partners are maintaining effective IoT security, but there is also still no universal, certifiable standard for IoT security.

For businesses, it’s a case of conducting an audit and risk assessment for known IoT devices that are used in the business. One basic security measure is to make sure that any default usernames and passwords on these devices are changed as soon as possible. For home users of smart products, who don’t run checks and audits, it appears that others (as in the case of the German Federal Network Agency) need to step in on their behalf and force manufacturers to take security risks seriously.

Your Keystrokes Being Tracked

A new study from Princeton University has suggested that your keystrokes, mouse movements, scrolling behaviour, and the entire contents of the pages you visit may be tracked and recorded by hundreds of companies.


The study revealed that no fewer than 480 websites of the world’s top 50,000 sites are known to have used a technique known as ‘session replay’, which, although designed to allow companies to gain an understanding of how customers use websites, also records an alarming amount of potentially dangerous information.

The researchers found that companies are now tracking users individually, sometimes by name.

The Software

The session replay software detected in the study was offered by seven firms: FullStory, SessionCam, Clicktale, Smartlook, UserReplay, Hotjar and Yandex.

The research showed that companies using the software (on 492 sites) were sharing information about individuals with one or more of the seven replay companies, and that the percentage of sites giving information to the software companies was likely higher, because the replay companies track only a sample of visits to a website, not all of them.

Companies Using The Software

As indicated in the research, some companies believed to be using session replay software include the Telegraph website, Samsung, Reuters, Home Depot (US retailer) and CBS News.

What’s The Risk?

As the researchers point out, this kind of software is like someone looking over your shoulder: the extent of the data collected may far exceed user expectations, and there is no visual indication to the website visitor that such monitoring is taking place.

Security commentators have noted that, alongside the general browsing data collected by these third-party replay scripts, the scripts are also capable of collecting some very sensitive personal information, e.g. medical conditions and credit card details. Depending on how this data is transmitted and stored (where, and how securely?), it could expose people to risks such as identity theft and online scams.
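One common mitigation is to redact sensitive fields before a recording ever leaves the page. Below is a minimal, hedged sketch of that idea; the field names and masking scheme are assumptions for illustration, not any replay vendor's actual API.

```python
# A minimal sketch of replay-recording redaction: mask sensitive form
# fields before anything is recorded or transmitted. The field names
# below are illustrative assumptions, not a vendor's real schema.
SENSITIVE_FIELDS = {"card_number", "cvv", "password", "medical_notes"}

def redact(form_data: dict) -> dict:
    """Mask sensitive values so a replay recording never contains them."""
    return {key: ("***REDACTED***" if key in SENSITIVE_FIELDS else value)
            for key, value in form_data.items()}

# The card number is masked; the harmless field passes through unchanged.
print(redact({"name": "A. User", "card_number": "4111111111111111"}))
```

In practice this kind of masking has to happen client-side, before the replay script captures the DOM, which is exactly why researchers worry when it is left to each website to configure correctly.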

The research also raised the question of whether state-sponsored surveillance is being carried out with session replay software, when it was noted that Yandex (one of the session replay software companies) is also Russia’s largest search engine.

What Does This Mean For Your Business?

Creeping surveillance and monitoring for multiple purposes is now part of our daily lives and includes e.g. CCTV, monitoring / surveillance of behaviour and Internet use at work, tracking via our mobile phones, EPOS / supermarket recording of our purchases, storage of our browsing history as part of the Investigatory Powers Bill / ‘Snooper’s Charter’, social media monitoring, and government attempts to gain back-doors into, and stop end-to-end encryption of, popular platforms like WhatsApp.

Keystroke monitoring in itself is nothing new, but the difference now is that cyber-crime is at a high, data protection has become a more public issue with new regulation (GDPR) and its data breach reporting requirements on the way, and the latest session replay software is capable of recording a great deal of detail, including our most sensitive data and interests.

For businesses, session replay software could be an asset in understanding more about customers and making marketing more effective and efficient. As consumers, we could be forgiven for having cause for concern, and with ad-blockers capable of filtering out only some replay scripts, we remain somewhat vulnerable to the risks that they may pose.

57 Million Data Breach Concealed By Uber – Hackers Paid

It has been reported that Uber concealed a massive data breach from a hack involving the data of 57 million customers and drivers, and then paid the hackers $100,000 to delete the data and to keep quiet about it.

More Than Two Years Ago?

Reportedly, the hacking of ride-hailing service Uber’s stored data took place more than two years ago. Instead of reporting the breach to regulators and going public with the news at the time, Uber is now accused of concealing the breach.

What Actually Happened?

Reports indicate that back in 2016, two hackers were able to access a private GitHub coding site used by Uber software engineers. Using login details obtained via GitHub, the attackers were able to access the Amazon Web Services account that handled the company’s computing tasks, where they found an archive of rider and driver information. The hackers are believed to have stolen this information and then to have emailed Uber asking for money.
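Breaches of this kind often start with credentials committed to a code repository, which is why many teams scan commits for key-shaped strings. A hedged sketch of such a scan follows; the regex covers only the well-known AWS access key ID format (`AKIA` plus 16 upper-case letters or digits), whereas real secret scanners check many more patterns.

```python
import re

# AWS access key IDs follow a well-known format: "AKIA" followed by 16
# upper-case letters or digits. Scanning committed text for that shape
# is a common first line of defence against repository credential leaks.
AWS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_keys(text: str) -> list:
    """Return any strings in `text` that look like AWS access key IDs."""
    return AWS_KEY_PATTERN.findall(text)

# Example: a config line accidentally committed to a repository.
# (AKIAIOSFODNN7EXAMPLE is AWS's documented example key, not a real one.)
snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_leaked_keys(snippet))
```

A scan like this only catches keys that are present in plain text; it does not remove the need to keep credentials out of repositories in the first place, e.g. via environment variables or a secrets manager.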

Hackers Paid

Almost as shocking as Uber keeping quiet about the breach for 2 years or more is their reported decision to pay the hackers $100,000 to delete their copy of the data, and to keep quiet about the breach. At the time of the hack, in November 2016, Uber was negotiating with U.S. regulators (Federal Trade Commission) who were investigating separate claims of privacy violations by the company and Uber had just settled a lawsuit with the New York attorney general over data security disclosures.

Kalanick and Sullivan

Uber’s former CEO, Travis Kalanick, who was ousted from the role earlier this year (but remained on the board), is reported to have known about the breach a month after it took place.

Joe Sullivan, the outgoing security chief, also appears to be somewhat in the frame over how the hack was handled, as it was only when Uber’s board commissioned an investigation into the activities of Sullivan’s security team (by an outside law firm) that the hack, and the failure to disclose it, were discovered.

What Kind of Data Was Stolen?

Reports indicate that, within the 57 million stolen records of names, email addresses and mobile phone numbers, 600,000 drivers had their names and driving licence numbers exposed. This has led to affected drivers now being offered free credit monitoring protection.


Unfortunately, this is not the first time that poor practice has been uncovered in how Uber deals with data. For example, the U.S. has opened at least five criminal probes into the company’s activities around data, in addition to the multiple civil lawsuits that the company faces. In the UK, regulators have also moved against the service on the grounds of alleged reckless behaviour, with Uber losing its London licence in September.

What Does This Mean For Your Business?

How companies store and handle data is, in today’s society, important to consumers, and to governments. The introduction of GDPR next year and the potentially severe penalties for businesses / organisations that don’t comply is evidence of how Europe and the UK are determined to force businesses / organisations to be more responsible, transparent, and follow practices that will ensure greater security. If companies really want to destroy their reputation and brand and risk being closed down, there are few better ways than [a] having a significant data breach (or being a repeat offender), and [b] failing to disclose that breach until being forced to do so.

Uber joins a line of well-known businesses that have made the news for all the wrong reasons where data handling is concerned, e.g. Yahoo’s 2014 data breach of 500 million users’ accounts, followed by the discovery that it had been the subject of the biggest data breach in history back in 2013. Similar to the Uber episode is the Equifax hack, in which 143 million customer details were stolen (possibly 44 million from UK customers), the company waited 40 days before informing the public, and three senior executives sold shares worth almost £1.4m before the breach was publicly announced.

This story should help to remind businesses how important it is to invest in keeping security systems up to date and to maintain cyber resilience on all levels. This could involve keeping up to date with patching (9 out of 10 hacked businesses were compromised via un-patched vulnerabilities), and should extend to training employees in cyber-security practices, and adopting multi-layered defences that go beyond the traditional anti-virus and firewall perimeter.

Companies need to conduct security audits to make sure that no old, isolated data is stored on any old systems or platforms, and no GitHub-style routes are offering cyber-criminals easy access. Companies may now need to use tools that allow security devices to collect and share data and co-ordinate a unified response across the entire distributed network.

The reported behaviour of Uber is clearly poor and likely to inflict even more damage on the reputation and brand of the company. The hack is also a reminder to businesses to maintain updated and workable Business Continuity and Disaster Recovery Plans.

Prison Sentences Demanded For Unauthorised Data Usage

The Information Commissioner’s Office (ICO) has said it backs the view that anyone accessing personal data without a valid reason, or without their employer’s knowledge, is committing a criminal offence, should be prosecuted, and that prison sentences should be an option.

Recent Case

A recent case involving a nursing auxiliary at Newport’s Royal Gwent Hospital has re-ignited the ICO’s calls to get tough on personal data snoops. 61-year-old Marian Waddell of Newport was found to have accessed the records of a patient who was known to her on six occasions between July 2015 and February 2016, without a valid business reason and without the knowledge of the data controller (at the Aneurin Bevan University Health Board). The data controller is the person who (alone, jointly or in common with other persons) determines the purposes for which, and the manner in which, any personal data is to be processed.

In this case, nursing auxiliary Waddell was found guilty of a section 55 offence (under the Data Protection Act 1998) and was fined £232, ordered to pay £150 costs, and ordered to pay a £30 victim surcharge.

Fines … For Now

Section 55 offences of this kind are currently only punishable by fines, and such fines and costs have totalled £8,000 this year for nine convictions.

Section 55 of the Data Protection Act 1998 refers to the unlawful obtaining etc. of personal data, and it states that “a person must not knowingly or recklessly, without the consent of the data controller – obtain or disclose personal data or the information contained in personal data, or – procure the disclosure to another person of the information contained in personal data.”

The ICO, however, would like to see tougher penalties for data snooping. For example, a blog post by Mike Shaw, ICO enforcement group manager and head of the ICO’s criminal investigations team, highlighted the fact that offenders face not only fines and payment of prosecution costs, but also media (internet) coverage of their offences and damaged future job prospects. Mr. Shaw also stated that the ICO would like to see custodial sentences introduced as a sentencing option for the courts in the most serious cases.

Not Just An NHS Problem

The ICO has been quick to point out that data snooping, and convictions for it, are not confined to the NHS. Prosecution cases this year have also been brought against employees in local government, charities and the private sector.

Motives for data snooping vary, from sheer nosiness to seeking financial gain.

What Does This Mean For Your Business?

With GDPR soon to be introduced, and with the ICO now pushing for possible prison sentences for certain data offences, businesses need (if they haven’t done so already) to make data protection and compliance with data protection law a priority. This story should remind anyone in any business or organisation that, if you have access to personal data, that data is out of bounds to you unless you have a valid and legal reason for looking at it.

Businesses can help to make all staff aware of the rules and regulations for handling and processing data through staff training and education.

New, Free Secret Browsing and Cyber Security Service

Quad9 is a new, free service that will allow users to keep their Internet browsing habits secret and their data safe from malicious websites, botnets, phishing attacks, and marketers.

What’s The Problem?

When you browse the Internet, your Domain Name System (DNS) resolver is likely set to whatever your ISP chooses (unless you have changed it). DNS providers can monitor your traffic data, and this information is often resold to online marketers and data brokers. We all face the security threat of unknowingly visiting domains associated with things like botnets, phishing attacks, and other malicious internet hosts. Many businesses also have to go to the trouble of running their own DNS blacklisting and whitelisting services.


The new Quad9 free public Domain Name Service (DNS) system addresses all of these threats. The service promises not to collect, store, or sell any information about your browsing habits, thereby freeing the user from receiving even more unwanted attention from marketers in the future.

Also, a large part of the value of the service is that it will block domains associated with botnets, phishing attacks, and other malicious internet hosts, and relieve businesses of the need to maintain their own blacklisting and whitelisting services.

How Does It Work?

The Quad9 system, so named because of its Internet Protocol address, draws upon IBM X-Force’s threat intelligence database, which is made up of more than 40 billion analysed web pages and images. The Quad9 service also draws upon 18 other threat intelligence feeds, including the Anti-Phishing Working Group, Bambenek Consulting, F-Secure, mnemonic, 360Netlab, Hybrid Analysis GmbH, Proofpoint, RiskIQ, and ThreatSTOP.

Quad9 uses its intelligence feeds and database to keep an updated whitelist of domains never to block, using a list of the top one million requested domains. It also keeps a “gold list” of safe providers e.g. Microsoft’s Azure cloud, Google, and the like.

All of this means that, when a Quad9 user browses the Internet and visits a website, types a URL into a browser, or follows a link, Quad9 checks the site against its databases and feeds to make sure it’s safe. If it isn’t safe, access is blocked, thus protecting users from possible security threats.
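The blocking decision described above can be pictured as a simple lookup against blocklists and the "gold list" of always-safe providers. The sketch below uses invented domain lists for illustration, not Quad9's real data or logic:

```python
# Illustrative domain lists standing in for real threat-intelligence
# feeds: a blocklist of known-bad hosts and a whitelist ("gold list")
# of providers that should never be blocked.
BLOCKLIST = {"malicious-phish.example", "botnet-c2.example"}
WHITELIST = {"azure.microsoft.com", "google.com"}

def resolve_decision(domain: str) -> str:
    """Decide whether a filtering resolver should answer a DNS query."""
    if domain in WHITELIST:
        return "resolve"   # gold-list domains are never blocked
    if domain in BLOCKLIST:
        return "block"     # known-bad domains get no answer
    return "resolve"       # unknown domains resolve normally

print(resolve_decision("botnet-c2.example"))  # block
print(resolve_decision("google.com"))         # resolve
```

A real filtering resolver performs this check for every query at its network edge, which is why the protection is transparent to the user's browser and apps.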

Not For Profit

The Quad9 service is the result of a non-profit alliance between IBM Security, Packet Clearing House (PCH), and The Global Cyber Alliance, an organisation founded by law enforcement and research firms.

What Does This Mean For Your Business?

This service offers businesses another useful and free tool in the fight to maintain cyber security and resilience in an environment where threats seem to be around every corner. The service has some credible contributors with serious critical mass, and has a presence in over 70 locations across 40 countries, with plans to double its global presence over the next 18 months. This means that Quad9 could add real value to business efforts to deter threats that can come from anywhere in the world. It could also save businesses the time, trouble, and extra risk of having to compile their own (often inadequate) blacklists and whitelists, and can help businesses to defend themselves against evolving threats. This kind of service also helps protect against all-too-common human error by blocking threats automatically.

Businesses hoping to use the service simply need to change the DNS settings in their device or router to point to 9.9.9.9. Installation videos and guides are also available online.

Tech Tip – Face ID: Unlock Your Phone With Your Face

Although not the same as Apple’s Face ID system, many Android devices come with a built-in face-recognition feature (‘Trusted Face’, part of Smart Lock) to give you extra security and to enable you to unlock your device with your face. Here’s how to use it:

  • Go to your device’s Settings > Security.
  • Tap Screen Lock.
  • Select a secure lock method (PIN, password or pattern), whichever you find easiest.
  • Tap the Smart Lock option and enter the PIN, password or pattern you chose.
  • Skip through the next page with the ‘Got it’ button.
  • On the next page, tap Trusted Face.
  • Follow the instructions to set up the face ID, capturing your face with your device’s camera.

Xmas Toys – Security Concerns

With Christmas just around the corner, consumer watchdog Which? has asked retailers to stop selling some popular internet-connected toys which have “proven” security issues that could allow attackers to take control of the toy or send messages.

Toys At Risk

Consumer watchdog Which? has identified toys such as the Furby Connect, the i-Que robot, Cloudpets and Toy-fi Teddy as having a security vulnerability: because no authentication is required, anyone within range could connect to them via Bluetooth.

Children At Risk

The main worry is that children, and the privacy / security of all members of a household, could be put at risk because manufacturers have cut costs, been careless, or rushed their products to market without building in adequate protection against takeover / hacking and reverse engineering, e.g. to conduct surveillance.

Toy Makers Say

In the light of the Which? research, Hasbro, the manufacturer of the Furby Connect, has pointed out that it would take a large amount of reverse-engineering of its product, plus the creation of new firmware, for attackers to have a chance of taking control of it.

Vivid Imagination, which makes the i-Que, is reported as saying that, although it will review Which?’s recommendations, it is not aware of any reports of these products being used in a malicious way.

Old Fears

The idea that a toy could pose a security risk in this way dates back to 1998, when the US National Security Agency banned the small robotic toy ‘Furby’ from its premises.

Also in the US, back in July this year, the FBI issued an urgent announcement describing the vulnerability of internet-connected toys to such risks, explaining steps to take to minimise the threat. The main concern appeared to be that young children could tell their toys private information, thinking they’re speaking in confidence. This information could be intercepted via the toy, thereby putting the child and family at risk.

Other Types of ‘Toy’

There was also news this week that Hong Kong-based firm Lovense had to issue a fix to the app for its remotely (Bluetooth) controlled sex toy (a vibrator), after a Reddit user discovered a lengthy recording on their phone which had been made during the toy’s operation.

This prompted more concerns about where the audio files (recorded via a user’s smartphone microphone) are being stored. The company is reported as saying that the audio files are not transmitted from the device, that the problem was caused by “a minor bug” limited to Android devices, and that no information or data was sent to its servers.

Not The First Time

This is not the first time that concerns have been raised about IoT sex toys. Back in March, customers of start-up firm Standard Innovation, manufacturer of the IoT ‘We-Vibe’ products, were left red-faced and angry after the company was judged by a court to have covertly gathered data about how (and how often) customers used their Wi-Fi-enabled sex toy.

What Does This Mean For Your Business?

These reports have re-ignited old concerns about the challenge of managing the security of the many Internet-connected / smart / IoT devices that we now use in our business and home settings.

Where businesses are concerned, a Vodafone survey back in July 2016 showed that three-quarters of businesses saw how they use the Internet of Things (IoT) as a critical factor in their success. Many technology commentators have also noted that the true extent of the risks posed by IoT device vulnerabilities is unknown, because the devices are so widely distributed globally and large organisations have tended not to include them in risk assessments for devices, code, data, and infrastructure.
It has also been noted by many commentators that not only is it difficult for businesses to ascertain whether all their hardware, software, and service partners are maintaining effective IoT security, but there is also still no universal, certifiable standard for IoT security.

Businesses, therefore, may wish to conduct an audit and risk assessment for known IoT devices that are used in the business. One basic security measure is to make sure that any default usernames and passwords on these devices are changed as soon as possible.
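As a minimal sketch of that basic measure, an audit could simply flag inventory entries that still use factory logins. The device list and credential pairs below are invented for illustration; they are not a real product database.

```python
# Invented set of common factory (username, password) pairs.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def flag_default_logins(devices: list) -> list:
    """Return the names of devices whose login is a known factory default."""
    return [device["name"] for device in devices
            if (device["username"], device["password"]) in DEFAULT_CREDENTIALS]

# A tiny illustrative inventory of IoT devices on the network.
inventory = [
    {"name": "lobby-camera", "username": "admin", "password": "admin"},
    {"name": "meeting-room-tv", "username": "svc_tv", "password": "Xk92!fq"},
]
print(flag_default_logins(inventory))  # only the camera still uses factory credentials
```

A check like this is only as good as the inventory behind it, which is why the audit and risk assessment come first.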

Security experts also suggest that anyone deploying IoT devices in any environment should require the supply chain to provide evidence of adherence to a well-written set of procurement guidelines that relate to some kind of specific and measurable criteria.

Microsoft has also compiled a checklist of IoT security best practice. This highlights the different areas of security that need to be addressed by the organisations involved throughout the lifecycle of an IoT system e.g. manufacturing and integration, software development, deployment, and operations.

Huddle Leaked Business Documents

A flaw discovered in the collaboration tool Huddle is believed to have left private company documents viewable by unauthorised persons.

What is Huddle?

Huddle is a cloud-based, ‘secure’ software system for collaborative work, file sharing and project management. It can be accessed through mobile and desktop apps, and can be integrated with enterprise tools such as Microsoft Office, Google Apps for Work and SharePoint.

Used By Government Agencies

What makes this recent discovery more worrying and embarrassing is the fact that Huddle publicly claims that more than 80% of UK central government agencies use the Huddle system and that it has administrative, technical and physical safeguards; yet a simple login flaw appears to have exposed clients to potentially serious security risks.

What Happened?

The security flaw is reported to have been discovered by a journalist who tried to log in and access a shared diary for their team, but was instead logged in to a KPMG account, and was able to view a directory of private documents and invoices, and an address book.

Huddle also discovered later that an unauthorised (unknown) person had accessed the Huddle workspace of the BBC children’s programme Hetty Feather, but had not opened any of the private documents.


Huddle’s reported explanation of the problem is that, because two users arrived at the login server within 20 milliseconds of each other, they were both given the same authorisation code. This duplicate code was then carried through to the security token process; whoever was fastest to request the security token was logged in to the system, and was therefore able to see the other company’s files.
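The failure mode, and the reported fix, can be sketched as follows. This is illustrative only: it mimics the described behaviour (a code derived from a coarse clock colliding for near-simultaneous arrivals) rather than reproducing Huddle's actual code.

```python
import secrets
import time

# Failure mode: an authorisation code derived from a millisecond clock.
# Two users arriving within the same millisecond receive the same code.
def timestamp_code() -> str:
    return str(int(time.time() * 1000))  # millisecond resolution

a, b = timestamp_code(), timestamp_code()
# a == b is likely here, because both calls land in the same millisecond.

# The reported fix amounts to generating a fresh, unguessable code on
# every invocation, so no two sessions can ever share one.
def unique_code() -> str:
    return secrets.token_urlsafe(32)  # cryptographically random

assert unique_code() != unique_code()
```

The general lesson is that session identifiers and authorisation codes must be both unique and unpredictable; anything derived from a clock is neither.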


A statement from Huddle appeared to play down the seriousness of the discovery by pointing out that the bug had only affected six sessions out of 4.96 million log-ins between March and November.

Now Fixed

Huddle users will be relieved to hear that Huddle has now fixed the bug by making sure that a new authorisation code is generated every time the system is invoked.

What Does This Mean For Your Business?

The important point for businesses to take away from this story is that even trusted, popular, market-leading third-party systems are likely to contain undiscovered bugs – no system is perfect – although the chances of any one bug being discovered and exploited are very small. It is also a good (and lucky) thing that a responsible person (the journalist) discovered and reported the bug so that it could be fixed.

Critics, however, have highlighted how surprising and worrying it is that a global leader in secure content collaboration, one that is supposed to offer a world-class service and publicises how its system is trusted with sensitive government information, could have its system so easily compromised, without any hacking or illegal activity being needed.

For the companies whose details have been accessed, it’s unlikely to be the rarity of such an event that concerns them, but rather the fact that they trusted a third party with their company security and have suffered a potentially damaging breach as a result. It is also likely to damage trust in the Huddle service, raise questions about how rare such an event really is, and tempt some companies to switch suppliers, or perhaps to use the system only for less sensitive projects.

Google’s Scary Hack Stats

With more than 15% of Internet users reporting takeovers of their email or social networking accounts, new research by Google and the University of California, Berkeley has shed light on how passwords are stolen and how accounts are hacked.

Tracking Black Markets

The research, which took place between March 2016 and March 2017 and focused on password-stealing tactics, tracked several black markets that traded third-party password breaches, as well as 25,000 blackhat tools used for phishing and keylogging.

This tracking identified a staggering 788,000 credentials stolen via keyloggers, 12 million credentials stolen via phishing, and 3.3 billion credentials exposed by third-party breaches.


Google’s summary of the research was that enterprising hijackers are constantly searching for, and are able to find, billions of different platforms’ usernames and passwords on black markets. This means that many of us are (unknowingly) at risk of suffering a takeover of our accounts.

For example, the research found that 12% of the exposed records included a Gmail address serving as a username, together with a password; of those passwords, 7% were still valid due to reuse.
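Reuse of breached passwords, the problem behind that 7% figure, can be detected by comparing password hashes against a corpus from known breaches. A minimal sketch follows; the corpus here is a tiny illustrative stand-in, whereas real breach databases hold billions of entries.

```python
import hashlib

def sha1_hex(password: str) -> str:
    """SHA-1 digest of a password, as used by common breach corpora."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# Tiny illustrative stand-in for a corpus of breached-password hashes.
BREACHED_HASHES = {sha1_hex(p) for p in ["password123", "letmein", "qwerty"]}

def is_reused(password: str) -> bool:
    """True if this password appears in the known-breach corpus."""
    return sha1_hex(password) in BREACHED_HASHES

print(is_reused("letmein"))     # True: this password is in the breach corpus
print(is_reused("T7#vm9!qLp"))  # False: not in the corpus
```

Checks like this are why providers can warn a user that a still-valid password has already appeared in a third-party breach, without needing the plaintext corpus itself.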

Google Accounts – Targeted By Phishing and Keyloggers

The research showed that phishing and keyloggers frequently target Google accounts, and that 12-25% of their attacks yield a valid password. In fact, Google concluded that the three greatest account takeover threats are phishing, followed by keyloggers, and finally third-party breaches.

Password Alone Not Enough

With greater security being applied to many different types of accounts e.g. two-factor verification and security questions, the research acknowledged that a password is rarely enough to gain access to e.g. a Google account. This explains why attackers now have to try to collect other sensitive data, and the research found evidence of this in the 82% of blackhat phishing tools and 74% of keyloggers that now attempt to collect a user’s IP address and location, and in the 18% of tools that collect phone numbers and device makes and models.

What Does This Mean For Your Business?

It is worrying for all businesses that so much information and so many hacking tools are available to criminals on the black market, and that attackers are becoming more sophisticated in their methods.

It is good, however, that Google has made a serious attempt with the research to understand the scale, nature, and sources of the risks that its customers face. The real value to businesses will come from Google and other companies using the findings to tighten account security, close loopholes, and try to keep one step ahead of cyber-criminals. Google states, for example, that it has already applied the insights to its existing protections: Safe Browsing now protects more than 3 billion devices (with alerts about dangerous sites / links), account logins are monitored for suspicious activity with extra verification requested where needed, and activity across Google products is regularly scanned. Google states that this scanning enables it to prevent or undo actions attributed to account takeover, notify the affected user, and help them change their password and re-secure their account into a healthy state.

Google’s 2 key pieces of advice to customers to help prevent account takeover are to:

  1. Visit Google’s ‘Security Checkup’ to make sure you have recovery information associated with your account, like a phone number.
  2. Allow Chrome to automatically generate passwords for accounts and save them via Smart Lock.
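The second piece of advice relies on machine-generated passwords being far harder to phish or guess than human-chosen ones. A minimal sketch of such a generator, using Python’s `secrets` module, is shown below; it is illustrative only and is not Chrome’s actual algorithm.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password containing at least one lowercase
    letter, one uppercase letter, and one digit.

    Uses the cryptographically secure 'secrets' module rather than
    'random', which is not suitable for security purposes.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw
```

Because the result is random and unique per account, a password captured by a phishing page or keylogger cannot be reused to break into the user’s other accounts.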

1 In 4 Law Firms Ready For GDPR

A report by managed services provider CenturyLink EMEA shows that, despite the threat of fines of up to €20m or 4% of annual global turnover for serious data protection failings, only 25% of more than 150 legal sector IT decision-makers said their firms were GDPR ready.

Why Not?

If any sector looks likely to be prepared for the introduction of GDPR next year, you could be forgiven for thinking that the legal sector would be at the forefront, given that companies and individuals will be seeking the advice, help and services of law firms with compliance and enforcement matters.

According to the report, however, three quarters of law firms are not ready, and are failing to achieve higher levels of privacy and data security because of challenges relating to human mistakes (50%), dedicated cyber attacks, e.g. distributed denial of service (DDoS), ransomware or SQL injection attacks (45%), and lost documentation and devices (36%).
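Of the attacks listed above, SQL injection is the most straightforward to defend against in code. The sketch below uses an in-memory SQLite database with invented table and column names to show the standard defence: parameterised queries, which bind attacker-supplied input as data rather than executable SQL.

```python
import sqlite3

# Toy in-memory database; table and contents are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (name TEXT, matter TEXT)")
conn.execute("INSERT INTO clients VALUES ('Acme Ltd', 'contract review')")

def find_matters(name):
    """Look up matters for a client by name, safely.

    The ? placeholder binds the value as data; never build the SQL
    string by concatenating user input into it.
    """
    rows = conn.execute(
        "SELECT matter FROM clients WHERE name = ?", (name,)
    ).fetchall()
    return [r[0] for r in rows]

# A classic injection string matches nothing instead of dumping every row.
print(find_matters("' OR '1'='1"))  # prints []
```

Had the query been assembled by string concatenation, the same input would have turned the WHERE clause into a condition that is always true, returning every client record.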

The report shows, for example, that 1 in 5 law firms have experienced an attempted cyber attack in the past month, and less than one-third (31%) of IT directors believe their firm is compliant with cyber-security legislation.

Shadow IT Worries

One other interesting area of confusion for law firms appears to be shadow IT. This term describes the apps and services that employees bring into company systems without going through the approved channels, and which employees use in their own way to solve specific work problems. Many companies see shadow IT as a threat to control, security and business strategy, although it can also be a strength in some situations.

The CenturyLink EMEA report shows that 11% of law firms have no shadow IT policies at all, and although one-third (33%) of firms don’t officially permit bring your own device (BYOD) or bring your own apps (BYOA), in reality 43% of IT decision-makers at law firms trust their IT teams to “do the right thing” for their business.

Not The First Negative GDPR Report

This is certainly not the first GDPR report with less than positive news. Only last month, a study by DMA group (formerly the Direct Marketing Association) revealed that more than 40% of UK marketers said their business is not ready for changes in the forthcoming General Data Protection Regulation (GDPR). One of the main issues highlighted in that report was confusion over issues of consent in GDPR. Some commentators have said that focusing too much on consent as a basis for data collection could mean that companies miss other options and issues, and end up not being ready and compliant in time.

What Does This Mean For Your Business?

The findings of this report are surprising in some ways, partly because in September last year, media reports indicated that the legal profession was already preparing itself for the introduction of GDPR, both in terms of how to build a market for litigation and ensuring that it fully understood the many different aspects of the Regulation and its implications. It appears, however, that legal firms are experiencing the same challenges as companies in many other sectors. To some extent, the news that law firms are apparently not up to speed with GDPR is likely to come as something of a relief to many businesses.

Law firms also face an added risk to their reputation, e.g. if they are hacked and there is a data breach due to non-compliance. This is why many law firms and other companies are now taking steps towards greater security by moving away from legacy, on-premise IT systems to private or public managed cloud arrangements. Outsourcing IT infrastructure to providers can offer a secure environment to support digital transformation initiatives, while managed services can minimise the risk posed by external attacks and free up internal resources to focus on innovative IT and business initiatives.

With GDPR, one of the key challenges for all companies, in addition to understanding consent issues, is making sure the technology is in place to help deal with data in a compliant way. Some technology products are now available to help deal effectively with data, and many tech commentators believe that developments in AI and machine learning / deep learning technologies will soon be usable by companies to support GDPR-compliant practices.

At this late stage, legal firms and those in other sectors clearly need to press on quickly and get to grips with GDPR and its implications. Ordinarily, one piece of advice for companies would be to seek professional advice to at least highlight which areas are most legally pressing, but in the light of this report, it seems that some law firms may be struggling to see how GDPR applies to themselves, let alone their clients.