Alright, let's do this. Same deal as The Month in Linux, this is my attempt at highlighting some of the interesting news, takes and contributions in security over the last month or so, and boy is this a spicy one. As always, feel free to let me know if I've missed anything cool!
GitHub Copilot
On June 29th GitHub unveiled Copilot, their code-completion tool which aims to become your AI pair programmer, trained on billions of lines of public code. What could go wrong?
Shortly after the unveiling, invites to Copilot's technical preview rolled out, and it wasn't long before questions were raised about bad code, copyright concerns and even leaked secrets.
GitHub addressed some concerns, saying that "about 0.1 per cent of the time, the suggestion may contain some snippets that are verbatim from the training set". Regarding leaked secrets, they explained that many are fictitious and that "the training data is all public code (no private code at all) so even in the extremely unlikely event a secret is copied, it was already compromised".
However, concerns over copyright and attribution are still being debated, with some questioning whether training AI models falls under "fair use", especially when the model is used to train paid-for software which could in turn be used to write more proprietary software.
Google TAG on 0-days
On July 14th Google's Threat Analysis Group (TAG) published an article "How we protect users from 0-day attacks" sharing some cool details of 4 different 0-days (in Chrome, IE and Safari's WebKit) used across 3 different campaigns.
The WebKit 0-day (CVE-2021-1879) they touch on was, according to Google & Microsoft, used by the "Russian state hackers who orchestrated the SolarWinds supply chain attack last year [to exploit] an iOS zero-day as part of a separate malicious email campaign aimed at stealing Web authentication credentials from Western European governments".
15 Year Old Netfilter Vuln
Sounding familiar? Yes, I already wrote about this in The Month in Linux. Sue me!
On March 30th Google security engineer @_tsuro announced the 1.0 release of kCTF, a Kubernetes-based infrastructure for CTFs. Two days later? Broken.
A few months later, Andy Nguyen published an awesome write-up of how he used a 15-year-old heap out-of-bounds write vulnerability in Netfilter (CVE-2021-22555) to bypass SMAP and KASLR and break Kubernetes pod isolation.
The Pegasus Project
I mean, where to begin with this one? On July 18th an international story broke: an investigation by 17 media organisations into a massive data leak from Israeli surveillance company NSO Group. Okay ... so what did they find?
The investigation revealed the widespread, global abuse of the Pegasus spyware which NSO Group licensed to their clients. Armed with vulnerabilities including an iOS "zero-click" exploit, NSO's clients said no thanks to the terms of service allegedly set out by NSO and targeted activists, politicians and journalists across the world with impunity.
China Attributed to Exchange Attacks
Earlier in March this year it was reported that 30,000 organisations across the US were targeted by hackers using four flaws in Microsoft Exchange Server. Off the bat, Microsoft's Threat Intelligence Center (MSTIC) attributed the attacks to a state-sponsored threat actor, dubbed Hafnium, operating out of China.
Fast forward to July 19th and the US, NATO and other allies publicly attributed the attacks to China and its Ministry of State Security (MSS), which used contract hackers to conduct them. Alongside the statement, the US Justice Department also announced criminal charges against 4 MSS hackers for their role in a "multiyear campaign targeting foreign governments and entities in key sectors, including maritime, aviation, defense, education, and healthcare in at least a dozen countries".
size_t-to-int in Linux's fs/seq_file.c
On July 20th Qualys dropped a security advisory detailing a size_t-to-int conversion vulnerability (CVE-2021-33909, dubbed "Sequoia") in the Linux kernel's filesystem layer, ultimately resulting in local privilege escalation. The vulnerability appears to have been introduced in July 2014 (Linux 3.16).
Malicious PyPI Packages
An investigation by JFrog revealed a series of supply-chain-style attacks targeting the popular Python package repository PyPI. Not the first of their kind, these attacks exploit flaws in repositories' moderation and security controls to serve malicious code via a mix of obfuscation, typosquatting, dependency confusion, social engineering and malicious pull requests.
In this instance JFrog estimated the malicious packages had been downloaded around 30,000 times, with payloads including Discord token stealing, autocomplete-data harvesting, system information gathering and remote code injection.
Apple's CSAM Detection
On August 5th Apple announced several new, controversial child safety protections in an effort to combat child sexual abuse material (CSAM).
The new features include a change to iMessage which "will add new tools to warn children and their parents when receiving or sending sexually explicit photos"; "expanding guidance in Siri and search", including intervening in CSAM-related searches; and finally on-device CSAM detection (currently US-only), which scans iCloud-synced photos and matches them against known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC).
Despite the announcement's stated emphasis on privacy in the design and choice of these features, many questions have been raised regarding both their efficacy and their ethics.
There's been lots of good discussion out there, particularly around the on-device scanning, and the Financial Times wrote a good article covering some of the concerns and issues. While fictitious, a blog post by areoform extrapolates from their own experiences with bureaucracy, as well as Apple's history of conceding to authoritarian regimes, to give us a bleak depiction of what a future without push-back on these kinds of trends in privacy and technology could look like.
Apple have since released an internal memo acknowledging the concerns but doubling down on their approach. It includes a message from NCMEC whose rhetoric seemingly implies that the "screeching voices of the minority" scared and outraged by this move are upset at the prospect of countering CSAM itself, rather than at the ethical and efficacy issues raised around Apple's approach...