Discussing Cybersecurity and Apple Infosec Research With Niels Hofmans of ‘ironPeak’

Last updated October 27, 2020
Written by:
Bill Toulas
Cybersecurity Journalist

Niels Hofmans, one of the researchers behind the recent revelations about the Apple T2 chip flaws, talks with us about bug hunting, ransomware, cloud security, DDoS, and Apple's moves to control its research community.

What drew you to the cybersecurity space, and what keeps you motivated to continue walking down that career path?

It all started when I was young, fascinated by the computers my father worked on as an IT freelancer. And like any other kid that age, I was particularly interested in gaming. The MMORPG I played with my friends was plagued by bots, which were hacked together by a community in Java or Turbo Pascal. Contributing to those, and eventually writing my own, was my introduction to programming. That later evolved into malware, since it was more challenging to develop programs that were 'FUD' (fully undetectable) and as small as possible. After hanging around for a year on a forum dedicated to malware programming, I ultimately got sucked in completely. After finishing my IT education in Networking & Systems Administration, I was very lucky to be able to jump straight into a cybersecurity career.

For sure, my main driver is the challenge: the cybersecurity landscape is constantly evolving, and you need to adapt to various threats while keeping up with the latest technology trends. No week is the same, and for someone who needs that extra challenge, it’s surely an exciting adventure.

Tell us a few things about ‘ironPeak’ and how you’ve decided to launch this new security platform.

After a couple of years in the field, it was apparent that demand for quality cybersecurity talent was at an all-time high. So I waited patiently for the right opportunity to pop up and switched to a full-time freelance position. Still the best decision I've ever made! I now use ironPeak as a brand to make open-source security resources available to everyone, do research, and consult for various clients. I get to pick my own interests, which is definitely great!

How has the COVID-19 outbreak affected your operations or the type of cases you’re called to deal with?

Now, this is an interesting one. While people around me battled with issues like temporary unemployment, demand for cybersecurity services has never been higher. Working from home has opened new attack vectors, because companies needed to rapidly deploy services to support a large remote workforce, and new threats are emerging to profit off these companies through, for example, targeted spear-phishing campaigns. I guess everyone remembers the sudden attention directed at Zoom, resulting in the discovery and exploitation of a set of severe vulnerabilities.

Since cloud security is within your spectrum of services, what threat trends do you see now that the world has moved to remote working?

The compartmentalization of functionality (or 'SaaSification', as you might call it) is both a blessing and a challenge. Moving functionality to a cloud-hosted variant shifts the operational risk to the respective hosting company, which can focus on securing its own stack, which it can do best. Even plain off-prem hosting is an improvement, for that matter, since it (mostly) forces you to adopt certain cloud concepts (cloud identity, identity-aware proxying, network segregation, runtime introspection, zero-trust architecture, etc.) that are usually already an improvement over the on-prem situation.

However, adopting those cloud paradigms is a serious challenge, since they evolved away from (or should I say ahead of) the conventional on-prem topology and additionally need to be translated into cloud vendor-specific terminology and solutions.

It’s kind of a paradox: while we are structurally moving to more overarching, generic solutions such as containerization (e.g., Kubernetes), service meshes, and microservices, we’ve never been more specific in our architecture.
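To make one of those cloud concepts concrete: identity-aware proxying means every request has to prove who it is, regardless of which network it arrives from. Below is a minimal zero-trust gateway sketch in Python, using only the standard library; the header name, token format, and secret handling are hypothetical choices for illustration, not any vendor's actual API.

```python
# Minimal identity-aware gateway sketch: every request must carry a valid,
# signed identity token; network location alone is never trusted.
import hmac
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET = b"rotate-me-out-of-band"  # in practice: pulled from a secrets manager


def valid_token(token):
    """Expect '<user>:<hex-hmac-of-user>' as issued by an identity provider."""
    try:
        user, sig = token.rsplit(":", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)


class ZeroTrustHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not valid_token(self.headers.get("X-Identity-Token", "")):
            # Deny by default: "internal" callers get no implicit trust.
            self.send_response(401)
            self.end_headers()
            self.wfile.write(b"identity required\n")
            return
        # A real gateway would now proxy the request to the upstream service.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello, verified caller\n")


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ZeroTrustHandler).serve_forever()
```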

We are currently witnessing a steep rise in DDoS and ransomware attacks, and even their combination in some cases. As a security professional, do you see a light at the end of the tunnel? Is there a definitive way to stop either?

When talking about ransomware, the deadliest weapon in a company's armory is still user awareness, as it has always been. Sure, nowadays we are able to further curb the threat using EDR (Endpoint Detection & Response), secure defaults such as (finally!) disabling macros in documents, and patching client software. Active phishing awareness training is a tiny cost for a company and is easily won back in prevented damages.
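On the 'disabling macros' point: Office persists that setting in the registry, so it is easy to verify fleet-wide. The sketch below is Windows-only and assumes Office 16.0 and the documented VBAWarnings value, where 4 means 'disable all macros without notification'; treat the exact path as an assumption to check against your Office version.

```python
# Sketch: check whether Word macros are fully disabled via the VBAWarnings
# registry value (Windows only; Office 16.0 path assumed, 4 = disable all).
import winreg

KEY_PATH = r"Software\Microsoft\Office\16.0\Word\Security"


def macros_disabled():
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "VBAWarnings")
    except FileNotFoundError:
        return False  # key or value absent: Office falls back to prompting
    return value == 4


if __name__ == "__main__":
    print("Word macros disabled:", macros_disabled())
```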

Denial-of-Service attacks are a different matter, however. On the one hand, it's never been easier to get behind a DoS scrubber, sometimes even free of charge and up to Layer 7 (e.g., Cloudflare, AWS Shield, etc.). On the other hand, criminals can still find out your edge router addresses and hit you there, circumventing those security measures. Since companies can't do away with their ASN or IPs so easily (even more so with the IPv4 shortage), that is a difficult problem to solve. I like the solution Cloudflare offers with its reverse Argo tunnel, which could be combined with a rotating IPv6 NAT, but the issue stays the same if the company bought its own ASN. It's never been simpler, yet never more complex.
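One practical way to blunt the direct-to-origin problem is to firewall the origin so that only the scrubber's published ranges can reach it. A hedged sketch: the URL below is Cloudflare's published ips-v4 list, while the generated iptables commands are just one illustrative way to consume it (a real setup would also cover ips-v6 and persist the rules).

```python
# Sketch: allow only Cloudflare's published ranges to reach the HTTPS origin,
# so attackers who discover the origin IP cannot bypass the scrubber.
import urllib.request

CF_RANGES_URL = "https://www.cloudflare.com/ips-v4"


def cloudflare_ranges():
    with urllib.request.urlopen(CF_RANGES_URL) as resp:
        return [l.strip() for l in resp.read().decode().splitlines() if l.strip()]


def iptables_rules():
    rules = [
        f"iptables -A INPUT -p tcp --dport 443 -s {cidr} -j ACCEPT"
        for cidr in cloudflare_ranges()
    ]
    rules.append("iptables -A INPUT -p tcp --dport 443 -j DROP")  # default deny
    return rules


if __name__ == "__main__":
    print("\n".join(iptables_rules()))
```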

You have recently uncovered a nasty and unfixable flaw in Apple’s T2 security chip. What are the real repercussions of this security bug for regular macOS users who use vulnerable devices?

First of all, let's make clear that the real groundbreaking work here was done by the t8012 team (t8012 is a reference to the Apple T2 chip). The T2 chip is the root of trust on your recent iOS and macOS devices, managing disk encryption (FileVault 2), TouchID and FaceID, encryption keys, and the verification of your macOS installation. The recent vulnerability allows others with physical access (or a malicious accessory) to hijack this chip and circumvent any security checks done there: capturing keystrokes, brute-forcing your password or encrypted disk, disabling TouchID and FaceID verification, bypassing Find My or Activation Lock, bypassing Mobile Device Management, and so on.

It might not sound like much for the average Apple user, since it requires physical access, but it's a big deal for persons of interest who are specifically targeted by malicious actors. Or imagine that once your iOS or macOS device is stolen, the thieves can now read your data and prevent you from ever finding out where it resides.
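For readers wondering whether their own machine carries the chip: macOS lists the T2 under the iBridge controller section of system_profiler. A quick sketch (macOS only; matching on the string 'T2' in the output is a heuristic, not an official API):

```python
# Sketch: detect an Apple T2 chip on macOS by querying the iBridge
# controller data type via system_profiler.
import subprocess


def has_t2():
    out = subprocess.run(
        ["system_profiler", "SPiBridgeDataType"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "T2" in out


if __name__ == "__main__":
    print("This Mac has a T2 chip:", has_t2())
```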

In your opinion, should Apple and hardware vendors, in general, be obliged to replace intrinsically insecure devices when it’s impossible to fix them via a software patch?

I have two major issues with the current situation:

a) Apple was clearly aware of the vulnerability on its iOS devices way ahead of publication but decided not to tackle the issue on its laptops, even though it should have known the implications. Instead, they gambled on luck, hoped nobody would notice, and continued the production of their 2020 Macs until the new ARM silicon arrives. I'm hoping at least the new Intel iMac at the end of this year will contain a T3 chip.

b) The second gripe is the lack of communication. You would think that a company that uses security and privacy as big marketing terms would react faster to the situation. After trying to contact them for over a month through various direct channels, I saw no other way than to bring this to light in a public blog post. Mind you, I did not use the Responsible Disclosure Program but the security team's email channel.

What would you say is the main reason a large company that went to the trouble of developing a chip specifically focused on keeping things secure still failed? Is the existence of vulnerabilities in everything inevitable?

Difficult to say without peeking into Apple's offices right now, but completely eradicating vulnerabilities is practically impossible. How you handle them, however, and what you do to prevent them from recurring is key. In itself, the reuse of in-house chips makes sense, because you test the waters and expand your expertise on the matter, discovering vulnerabilities internally or (in this case) externally along the way.

In this context, are security chips generally a good idea, given that they can potentially serve hackers as a door to the highest privilege level?

Segregated security chips have actually been around for a long time, albeit in much simpler forms: HSMs (Hardware Security Modules). A lot of motherboards and devices nowadays sport these to bind authentication to hardware and make it very difficult to extract keys. It is only logical that cramming a complete operating system and feature set into a separate chip greatly widens that chip's attack surface. The dumber/simpler a device is, the fewer vulnerabilities it can contain.

But does that mean they should have put every concern in its own separate chip? Of course not, since that would only duplicate the issue. Let's not forget that the main reason Apple began sticking functionality into the T2 was to test the waters for its own silicon.
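The defining HSM property mentioned above, keys that can be used but never extracted, is easy to model. Here is a toy Python sketch of that interface using the cryptography package; the class and handle scheme are purely illustrative, and a real HSM would be driven through PKCS#11 or a vendor API instead.

```python
# Toy model of the HSM guarantee: callers hold only an opaque handle and ask
# the module to sign on their behalf; the private key never leaves the class.
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


class ToyHSM:
    def __init__(self):
        self._keys = {}  # handle -> private key, internal to the "module"

    def generate_key(self):
        handle = secrets.token_hex(8)
        self._keys[handle] = ec.generate_private_key(ec.SECP256R1())
        return handle  # only the handle escapes, never the key material

    def sign(self, handle, message):
        return self._keys[handle].sign(message, ec.ECDSA(hashes.SHA256()))

    def public_key(self, handle):
        return self._keys[handle].public_key()


hsm = ToyHSM()
h = hsm.generate_key()
sig = hsm.sign(h, b"attest this boot stage")
hsm.public_key(h).verify(sig, b"attest this boot stage", ec.ECDSA(hashes.SHA256()))
print("signature verified; the private key never left the module")
```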

As an independent researcher and bug hunter, how do you see Apple’s decision to raise walls around its security research program, essentially excluding Google’s Project Zero and anyone else who isn’t willing to accept the new disclosure policy?

This is an odd one, and I am not quite sure about it. While it is a positive change that they are moving towards a (kind of) more open security model by handing out test devices and offering responsible disclosure, I feel the only reason they are doing this is that they feel pushed to do so. Taking on Corellium only makes sense if you supply an alternative, so you don't trigger a mass reaction.

The reason they reject the 90-day disclosure policy comes down, I believe, to one thing: they want to maintain control.

Their policy states: “Apple will work in good faith to resolve each vulnerability as soon as practical.” I would be very happy if they turned around and actually respected that statement, though.

On the same note, Apple is engaged in a legal fight with iOS virtualization vendor Corellium. From your perspective, should security on iOS rely on open channels and tools like those provided by Corellium, or is Apple's approach totally justified?

While Corellium has done something truly amazing, I can fully understand Apple's point of view. Corellium did, allegedly illegally, reverse engineer and/or copy proprietary Apple software, and is still actively reselling it as a service to its broad customer base (pending a court ruling). Not much to say here, except that we shouldn't forget that certain exploit development companies use Corellium and never report vulnerabilities back to Apple.

If you were to share one piece of security advice with our readers, what would that be?

Awareness is key. If you don’t ask the question, you won’t even bother with the answer.


