Have an app? You need to meet this GDPR requirement

Tom Colvin
5 min read · Jun 10, 2022



Every business has heard of the mighty General Data Protection Regulation (GDPR), which sets out the rules businesses must follow when handling their customers’ data. It has a UK counterpart, the UK GDPR, which is almost precisely the same (being drafted from the EU regulation), meaning that the same rules apply to European and UK businesses alike.

Businesses with Android, iOS or web apps have particular responsibilities under the GDPR. And these increase yet further when the app can collect or display Personally Identifiable Information (PII). PII is considered to be anything that can be used to identify an actual person, such as their name, email address or phone number, and it needs careful handling under GDPR.

This article is about one specific requirement of the GDPR: Article 32, which requires companies to test their security regularly. What does that entail for a company with an app, and how do you stay compliant?

What does the GDPR say?

Well, the GDPR is typically imprecise. Here is paragraph 1 of Article 32:

1. Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:

(a) the pseudonymisation and encryption of personal data;

(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;

(c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;

(d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.

(The formatting is my own).

For businesses with iOS, Android or web apps, it is the “process for regularly testing, assessing and evaluating” in 1(d) that carries particular meaning.

Likewise, the “level of security appropriate to the risk” seems on the surface somewhat woolly, but actually conveys a significant amount of hidden meaning.

So then, what kind of testing is expected?

Regularly testing: App penetration tests

The “regularly testing” clause requires you to test your organisation’s environment, infrastructure and software for security vulnerabilities.

For a company with an app, this means testing the app’s security, primarily to ensure that data cannot be leaked. The easiest and most effective way to do this is with an app penetration test.

An app penetration test involves ethical hackers probing the security of your app to see where flaws lie.

It answers questions like:

  • Is the app’s authentication system secure, or would it be possible to get other users’ data?
  • Is it possible to trick the servers into revealing or changing data they shouldn’t (so-called “injection” attacks)?
  • Is the data stored in the app correctly encrypted?
  • Would it be possible to monitor or disrupt communications to and from the app and the internet?
  • Have authentication factors like fingerprint or face scanning been implemented correctly?
  • Is the back end running on a suitably secure infrastructure?
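To make the “injection” question above concrete, here is a minimal sketch in Python of the kind of flaw testers look for, using an in-memory SQLite database (the table and email addresses are invented for illustration). The vulnerable version splices user input straight into the SQL; the safe version uses parameter binding:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [("alice@example.com",), ("bob@example.com",)])

def find_user_unsafe(email: str):
    # VULNERABLE: the input is spliced into the SQL, so it can rewrite the query
    query = f"SELECT id, email FROM users WHERE email = '{email}'"
    return conn.execute(query).fetchall()

def find_user_safe(email: str):
    # SAFE: parameter binding keeps the input as data, never as SQL
    return conn.execute("SELECT id, email FROM users WHERE email = ?",
                        (email,)).fetchall()

payload = "' OR '1'='1"
print(len(find_user_unsafe(payload)))  # 2 — every row leaks
print(len(find_user_safe(payload)))    # 0 — the literal string matches nothing
```

A penetration tester would try payloads like this against every input your back end accepts; a single unparameterised query can be enough to leak the whole user table.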

Types of app penetration tests

There are different types of penetration tests:

Black box testing involves giving testers no privileged knowledge of your app, and asking them to find security flaws working from the ground up. This can be exciting — it’s hacking at its most brutal basic — but it can be very time consuming (and therefore expensive).

White box testing is more efficient. It involves giving testers a full understanding of the app’s architecture, communications and sometimes its code. This helps give testers a “leg up”, allowing them immediate access to information they would otherwise have to work for, and therefore consumes less of their time.

Often organisations find a happy medium between the two: so-called grey box testing. Testers are given some salient information to make them more efficient, but must find out anything more by themselves, which keeps the test closer to real-world conditions.

“Appropriate to the risk”

Article 32 of the GDPR requires “technical and organisational measures to ensure a level of security appropriate to the risk”. What’s appropriate?

Practically all apps store data, and most store Personally Identifiable Information (PII) — i.e. data which can identify a particular user. This includes names, email addresses and phone numbers. The GDPR specifies that PII needs particularly careful handling.

Note that any app with a login (whether using an email address, customer ID, username, etc.) contains PII.
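Article 32(1)(a), quoted above, specifically names pseudonymisation and encryption as measures. As one minimal sketch (the key and scheme here are my own illustration, not a prescription), PII such as an email address can be pseudonymised with a keyed hash, so that logs and analytics can still link records to the same user without ever containing the raw address:

```python
import hmac
import hashlib

# Illustrative key only; in practice load it from a secrets manager,
# never hard-code it in the app or repository
PSEUDONYM_KEY = b"replace-with-a-real-secret"

def pseudonymise_email(email: str) -> str:
    """Map an email to a stable, non-reversible token.

    The same address always yields the same token, so records can be
    linked, but the address cannot be recovered without the key.
    """
    normalised = email.strip().lower().encode()
    return hmac.new(PSEUDONYM_KEY, normalised, hashlib.sha256).hexdigest()
```

Note this is pseudonymisation, not anonymisation: whoever holds the key can still re-identify users, so under the GDPR the tokens remain personal data and the key itself needs protecting.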

Some apps store even more sensitive information, such as medical, legal or financial details. For these there is a much greater potential impact to the user in the case of a data breach. (Often particular industries, such as banking, have their own rules or codes of conduct which go further than GDPR in specifying what kind of handling is needed.)

“Appropriate to the risk” therefore means that the depth of testing should be proportional to the likelihood of a breach and the impact one would have on users.

The depth of the penetration test performed should be adjusted to this risk. For example, if the app handles sensitive data, testers should confirm that it offers 2-factor authentication and that it has been implemented securely. (It’s easy to get this wrong!)
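As one illustration of what a secure 2-factor implementation involves, here is a sketch of server-side verification of time-based one-time passwords (TOTP, RFC 6238 — the algorithm behind most authenticator apps), using only the Python standard library. The acceptance window and key handling are simplified assumptions:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_code(secret: bytes, submitted: str, now=None, window: int = 1) -> bool:
    """Accept codes from the current step plus/minus `window` steps,
    using a constant-time comparison to avoid timing side channels."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))
```

Testers check details exactly like these: whether codes are compared in constant time, whether the acceptance window is too generous, whether used codes can be replayed, and whether the shared secret is stored securely.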

Why test regularly?

Article 32 of the GDPR is particular about testing regularly. This makes sense because the security landscape changes regularly as new flaws are discovered, and new ways to hack are invented.

Simply, an app which was secure when it was released may not be secure a year later.

So how regularly should you test? Primarily, your app’s development process should be capable of identifying new releases which could affect security. Some releases are minor enough that they cannot affect it, or are extremely unlikely to; in those cases it would not be cost effective to re-test everything. Conversely, a major release should certainly be re-tested.

Outside of the release schedule, a good rule of thumb is to run penetration tests at least yearly; every 6 months is better, and high-risk apps should be tested quarterly or more.

Will penetration testing reveal all my app’s security flaws?

No. There is no way to make any app completely secure, and no way to find an exhaustive list of all the security problems in an app. Avoid anyone who tells you otherwise — they are snake-oil salespeople!

But in terms of “bang per buck”, penetration testing is a very efficient and effective way to gain both peace of mind and compliance.

To conclude

Article 32 of the GDPR requires that companies regularly test their security. For companies with iOS, Android and web apps this should include regular penetration testing.

Penetration testing involves ethical hacking attempts against your app, its back end and its infrastructure. It’s not guaranteed to find every possible issue, but it’s nonetheless effective, and the depth and breadth of testing can be customised to fit the risk.

Tom Colvin is CTO of Conseal Security, the mobile app security testing experts; and Apptaura, the app development specialists. Get in touch if I can help with any mobile security or development projects!

Written by Tom Colvin

Android developer / consultant; freelancer or through my agency Apptaura. Google Developer Expert in Android. tomcolvin.co.uk Articles 100% me, no AI.

No responses yet