LOLDrivers and HVCI

Michael Haag · Published in magicswordio · Dec 22, 2023


Friends, we meet again for another behind-the-scenes look at the LOLDrivers project. Lurking in our backlog for some time was the integration of Trail of Bits’ HVCI LOLDrivers Check script into the project. Our main goal was two-fold: first, to add a new tag in the YAML indicating whether a driver loads with HVCI enabled or not; and second, to generate files based on this information. This is crucial because it pinpoints a set of drivers that may still load by default in Windows. By incorporating this into the project and creating new files for these drivers, we aim to highlight a smaller, yet more potent set of drivers with potential for abuse. This addition will aid defenders in narrowing down the scope of hashes to block and names to track, and in deploying new features more effectively.

First — what is HVCI and why does this matter?

What exactly is HVCI? To quote directly from Microsoft: “Memory integrity is a virtualization-based security (VBS) feature available in Windows 10, Windows 11, and Windows Server 2016 and later. Memory integrity and VBS improve the threat model of Windows by offering stronger protections against malware that tries to exploit the Windows kernel. VBS uses the Windows hypervisor to create an isolated virtual environment, establishing a root of trust for the OS under the assumption that the kernel can be compromised. Memory integrity is a key component that safeguards and fortifies Windows by executing kernel mode code integrity checks within the secure, isolated VBS environment. It also limits kernel memory allocations that could be leveraged to compromise the system, ensuring that kernel memory pages are executable only after passing code integrity checks in the secure runtime environment. Additionally, executable pages are never writable.”

Incorporating this feature into our project underscores a select group of drivers that can potentially bypass HVCI checks. Does this mean a driver in the project is actively being abused in the wild? Not necessarily. Does it mean these are the ones adversaries will use next? Maybe? However, it does identify a group of drivers that should be on your radar. Naturally, if you have HVCI and other Windows controls in place, keeping an eye on these drivers is advisable.
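
As a quick aside, if you want to check whether memory integrity is at least configured on a host before you start chasing this list, the toggle lives under the DeviceGuard registry hive. Here is a minimal sketch in Python; it is illustrative and ours, not part of the LOLDrivers tooling, and it reads the configured state only, which is not necessarily the running state that msinfo32 reports:

import winreg

# Documented registry location for the memory integrity (HVCI) toggle.
HVCI_KEY = r"SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios\HypervisorEnforcedCodeIntegrity"

def hvci_configured() -> bool:
    """Return True if the memory integrity toggle is set in the registry.

    This reflects the configured state only; the running state is what
    msinfo32 / Win32_DeviceGuard report.
    """
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, HVCI_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "Enabled")
            return value == 1
    except FileNotFoundError:
        # Key absent: memory integrity has never been configured on this host.
        return False

if __name__ == "__main__":
    print("HVCI configured:", hvci_configured())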

Trail of Bits HVCI LOLDrivers Check

For this endeavor, we ran the script in its current state and received the following output.

To automate this a bit more, we modified the script to output CSV and tag each line as allowed or blocked based on the HVCI response. The output now looks like this:

The CSV version is here: https://gist.github.com/MHaggis/8d6de45b883b338e47de08b3cb4c9819
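
The change itself was minor. A rough sketch of the CSV-writing portion is below; the column names and the results list are stand-ins rather than the exact Trail of Bits code, so treat it as illustrative:

import csv

# Illustrative only: "results" stands in for whatever the HVCI check returns,
# and the column names are our own rather than the exact Trail of Bits output.
results = [
    ("example.sys", "<md5>", "<sha1>", "<sha256>", "Allowed"),
    ("another.sys", "<md5>", "<sha1>", "<sha256>", "Blocked"),
]

with open("hvci_drivers.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Driver", "MD5", "SHA1", "SHA256", "Status"])
    writer.writeheader()
    for driver, md5, sha1, sha256, status in results:
        writer.writerow({"Driver": driver, "MD5": md5, "SHA1": sha1,
                         "SHA256": sha256, "Status": status})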

Now we have a CSV of drivers that possibly load with HVCI enabled.

CSV: https://github.com/magicsword-io/LOLDrivers/blob/main/bin/hvci_drivers.csv

New Tags, Who Dis?

Now, we know we’ll need to modify every yaml file in the project with a new tag — LoadsDespiteHVCI — TRUE or FALSE.

We crafted a simple Python script that reads the CSV, recursively walks the yaml directory, matches hashes, and adds the tag.

Or as Nasreddine Bencherchali put it:

  • Loop through the files in the yaml folder
  • Open each one and read it as a dict with the yaml lib
  • Loop through the KnownVulnerableSamples, using enumerate to get the index as well
  • Check against the contents of the CSV
  • Update the dict accordingly
  • Re-write the new dict as yaml
import csv
import os
import yaml

yaml_directory = '../yaml'
csv_file_path = 'hvci_drivers.csv'
paths = ["../yaml"]


class NoAliasDumper(yaml.Dumper):
    def ignore_aliases(self, data):
        return True


def get_hashes_from_csv(csv_file_path):
    allowed_drivers = []
    disallowed_drivers = []

    with open(csv_file_path, mode='r', newline='', encoding='utf-8') as file:
        csv_reader = csv.DictReader(file)
        for row in csv_reader:
            if row['Status'] == "Allowed":
                row.pop('Status', None)
                allowed_drivers += [x for x in list(row.values()) if x]
            else:
                row.pop('Status', None)
                disallowed_drivers += [x for x in list(row.values()) if x]

    return allowed_drivers, disallowed_drivers


allowed_drivers, disallowed_drivers = get_hashes_from_csv(csv_file_path)


def yield_yaml_file(path_to_yaml_folder: list) -> str:
    for path_ in path_to_yaml_folder:
        for root, _, files in os.walk(path_):
            for file in files:
                if file.endswith(".yaml"):
                    yield os.path.join(root, file)


for file in yield_yaml_file(paths):
    with open(file, encoding="utf-8") as f:
        data = yaml.safe_load(f)

    vuln_samples = data['KnownVulnerableSamples']
    for index, sample in enumerate(vuln_samples):
        # We get the hashes of the sample
        md5 = sample.get('MD5') or ''
        sha1 = sample.get('SHA1') or ''
        sha256 = sample.get('SHA256') or ''

        # Tag the sample based on whichever hash we have available
        if md5:
            if md5 in allowed_drivers:
                data['KnownVulnerableSamples'][index]['LoadsDespiteHVCI'] = 'TRUE'
            else:
                data['KnownVulnerableSamples'][index]['LoadsDespiteHVCI'] = 'FALSE'
        elif sha1:
            if sha1 in allowed_drivers:
                data['KnownVulnerableSamples'][index]['LoadsDespiteHVCI'] = 'TRUE'
            else:
                data['KnownVulnerableSamples'][index]['LoadsDespiteHVCI'] = 'FALSE'
        elif sha256:
            if sha256 in allowed_drivers:
                data['KnownVulnerableSamples'][index]['LoadsDespiteHVCI'] = 'TRUE'
            else:
                data['KnownVulnerableSamples'][index]['LoadsDespiteHVCI'] = 'FALSE'

    with open(file, 'w', encoding='utf-8') as outfile:
        yaml.dump(data, outfile, default_flow_style=False, sort_keys=False, Dumper=NoAliasDumper)

print("For better or for worse. The script has finished executing :feels-good:")

Now — run it:

python hvcitag.py 

For better or for worse. The script has finished executing :feels-good:

Too easy!

Now all the files have a tag for each individual known vulnerable sample.

In this instance, within the same yaml file, one sample will load and another will not. This provides granular visibility that we didn’t have before.
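
To make that concrete, here is a small, self-contained sketch; the driver name and hashes are made up, but the fields mirror the KnownVulnerableSamples entries the tagging script writes out:

import yaml

# Hypothetical excerpt of a driver yaml after tagging; the filename and hashes
# are placeholders, but the structure mirrors what the tagging script produces.
doc = yaml.safe_load("""
KnownVulnerableSamples:
  - Filename: example.sys
    SHA256: aaaa...aaaa
    LoadsDespiteHVCI: 'TRUE'
  - Filename: example.sys
    SHA256: bbbb...bbbb
    LoadsDespiteHVCI: 'FALSE'
""")

for sample in doc["KnownVulnerableSamples"]:
    print(sample["SHA256"], "-> loads despite HVCI:", sample["LoadsDespiteHVCI"])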

Generate Files

Another great script in our back pocket, produced by Jose E Hernandez and Nasreddine Bencherchali, is our gen-files.py. This script manages all of our detection files: Sysmon, Sigma, hash lists, ClamAV, and so forth. LOTS of files.

We now want to generate similar files but for the drivers that load despite HVCI being enabled — based on our new tag.

def gen_loadsdespitehvci_lists(category_):
    """
    Generates lists of hashes for LoadsDespiteHVCI being TRUE
    """
    md5_list = []
    sha1_list = []
    sha256_list = []
    imphash_list = []
    for file in yield_next_rule_file_path(path_to_yml):
        category = get_yaml_part(file_path=file, part_name="Category")
        if category_.lower() == category.lower():
            known_vuln_samples = get_yaml_part(file_path=file, part_name="KnownVulnerableSamples")
            if known_vuln_samples:
                for i in known_vuln_samples:
                    loads_despite_hvci = i.get('LoadsDespiteHVCI', 'FALSE')
                    if loads_despite_hvci == 'TRUE':
                        if 'MD5' in i and i['MD5'] != "-":
                            md5_list.append(i['MD5'])
                        if 'SHA1' in i and i['SHA1'] != "-":
                            sha1_list.append(i['SHA1'])
                        if 'SHA256' in i and i['SHA256'] != "-":
                            sha256_list.append(i['SHA256'])
                        if 'Imphash' in i and i['Imphash'] != "-":
                            imphash_list.append(i['Imphash'])

    # Normalize case, trim whitespace and drop duplicates/empties
    md5_list = list(filter(None, list(set([i.strip().lower() for i in md5_list]))))
    sha1_list = list(filter(None, list(set([i.strip().lower() for i in sha1_list]))))
    sha256_list = list(filter(None, list(set([i.strip().lower() for i in sha256_list]))))
    imphash_list = list(filter(None, list(set([i.strip().lower() for i in imphash_list]))))

    return md5_list, sha1_list, sha256_list, imphash_list


def gen_loadsdespitehvci_authentihash_lists(category_):
    """
    Generates lists of authentihash of samples that load despite hvci
    """
    authentihash_md5_list = []
    authentihash_sha1_list = []
    authentihash_sha256_list = []
    for file in yield_next_rule_file_path(path_to_yml):
        known_vuln_samples = get_yaml_part(file_path=file, part_name="KnownVulnerableSamples")
        category = get_yaml_part(file_path=file, part_name="Category")
        if category_.lower() == category.lower():
            if known_vuln_samples:
                for i in known_vuln_samples:
                    loads_despite_hvci = i.get('LoadsDespiteHVCI', 'FALSE')
                    if loads_despite_hvci == 'TRUE':
                        if 'Authentihash' in i:
                            for key, value in i['Authentihash'].items():
                                if key == "MD5" and value != "-":
                                    authentihash_md5_list.append(value)
                                if key == "SHA1" and value != "-":
                                    if i['SHA1'] != "-":
                                        authentihash_sha1_list.append(value)
                                if key == "SHA256" and value != "-":
                                    if i['SHA256'] != "-":
                                        authentihash_sha256_list.append(value)

    # Remove leading and trailing spaces as well as any duplicates
    authentihash_md5_list = list(set([i.strip().lower() for i in authentihash_md5_list]))
    authentihash_sha1_list = list(set([i.strip().lower() for i in authentihash_sha1_list]))
    authentihash_sha256_list = list(set([i.strip().lower() for i in authentihash_sha256_list]))

    return authentihash_md5_list, authentihash_sha1_list, authentihash_sha256_list

Alright, so now we can generate new files for the drivers tagged TRUE for loading despite HVCI.

…and Sigma, and Sysmon, and hash lists… :)
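
Wiring the new lists into actual output files is then just a matter of calling the functions above and dumping the results. A rough sketch follows; it assumes it runs inside gen-files.py, where path_to_yml and the helper functions already exist, and the output file names and the "vulnerable driver" category value are illustrative rather than the real plumbing:

# Usage sketch: assumes this lives inside gen-files.py next to the functions
# above, so path_to_yml, yield_next_rule_file_path() and get_yaml_part()
# are already available. File names and the category value are illustrative.
md5s, sha1s, sha256s, imphashes = gen_loadsdespitehvci_lists("vulnerable driver")

for name, values in [
    ("loadsdespitehvci_md5.txt", md5s),
    ("loadsdespitehvci_sha1.txt", sha1s),
    ("loadsdespitehvci_sha256.txt", sha256s),
    ("loadsdespitehvci_imphash.txt", imphashes),
]:
    with open(name, "w", encoding="utf-8") as f:
        f.write("\n".join(sorted(values)) + "\n")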

A sampling of updated pages:

In Summary

Thanks to the team at Trail of Bits for crafting this utility! We are excited to share more narrowly scoped detection content and help teams decide where to start. As Trail of Bits mentioned in their initial release, take it for what you want. Is this 100%? Maybe. Is this narrowing LOLDrivers down to the ones that may load right now in your environment? Possibly.

Feedback? Questions? Assistance? Open a GitHub issue or hit us up via DM.

Happy Hunting!
