Automating parts of Active Directory pentests with BloodHound CE


BloodHound is one of the essential tools for every Penetration Tester and Red Teamer, and with the new release of BloodHound CE, BloodHound got some very nice and useful improvements. Even though BloodHound is best known for visualizing attack paths with graphs, a lot of information can be gathered by utilizing the underlying database directly. This blog post will show some examples of how the underlying database or the new API can be used to automatically find many basic weaknesses in an Active Directory environment.

The script is published in our GitHub repository bloodhound-adAnalysis. Feel free to reach out to me if you have any questions or feedback.

[Figure: Automated Active Directory Analysis]

Introduction

BloodHound is a tool we use in pretty much every pentest where we encounter an Active Directory (AD). It can visualize complex Active Directory structures, find possible attack paths and give a good overview of the environment. At the beginning of August, the new version BloodHound CE was released, coming with some new features and significant performance improvements. Some nice additions are the API and the deployment with Docker. Another change is that objects are no longer marked as high value; instead, they are now marked as Tier Zero. This is a nice improvement, since all Tier Zero assets are now marked in the GUI, which makes them easier to identify, and more assets are marked compared to BloodHound Legacy. Tier Zero assets are defined by SpecterOps in this blog post as all assets which have control over enterprise identities and their security dependencies. Since it is still an early-access release, some features are missing that will come in the future, like importing custom queries. For some missing features, BloodHound Legacy can still be used, e.g. to mark objects as owned or to clear the database, if the neo4j database port is forwarded from Docker.

Currently, we are working on automating certain findings we often come across during engagements, like disabled SMB signing or computers without LAPS. Playing around with BloodHound CE, I decided to start writing a simple Python script to automate some of those findings. Since there are now four ways to interact with BloodHound, I think it makes sense to briefly compare them and showcase the use cases for each of them.

BloodHound CE GUI / API

The BloodHound CE GUI is very nice for identifying attack paths or finding interesting targets. It gives an overview of all AD objects and their relationships to one another. For every object, a lot of information is available and can be visualized, e.g. which hosts can a user RDP to, or which objects does the user control? The biggest advantage of the GUI is visualizing longer chains and being able to easily see how each relationship in the chain can be exploited. BloodHound CE now works with an API in the background, which can also be used directly. The setup is very easy, and the provided Python script gives a good base for working with the API. The API can also be tested and is documented inside the GUI, which makes it very comfortable to get started.
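The snippets later in this post use such a client object with a _request helper. As a minimal sketch of what this can look like, assuming the chained HMAC signing scheme described in the BloodHound API documentation (treat the details as an approximation and compare with the official sample script):

import base64
import datetime
import hashlib
import hmac
import requests

class Client:
    def __init__(self, host, port, token_id, token_key):
        self._host = host
        self._port = port
        self._token_id = token_id
        self._token_key = token_key

    def _request(self, method, uri, body=None):
        # chained HMAC-SHA256 over method+uri, the request date (truncated to the hour) and the body
        digester = hmac.new(self._token_key.encode(), None, hashlib.sha256)
        digester.update(f'{method}{uri}'.encode())
        digester = hmac.new(digester.digest(), None, hashlib.sha256)
        datetime_formatted = datetime.datetime.now().astimezone().isoformat('T')
        digester.update(datetime_formatted[:13].encode())
        digester = hmac.new(digester.digest(), None, hashlib.sha256)
        if body is not None:
            digester.update(body)
        return requests.request(
            method=method,
            url=f'http://{self._host}:{self._port}{uri}',
            headers={
                'Authorization': f'bhesignature {self._token_id}',
                'RequestDate': datetime_formatted,
                'Signature': base64.b64encode(digester.digest()),
                'Content-Type': 'application/json',
            },
            data=body,
        )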

Neo4j Web/Bolt Interface

Another way to access the BloodHound data is through neo4j directly. The data can’t be visualized as in the GUI, but for certain use cases the raw text-based results are my preferred way. Additionally, the web interface has an option to export the data as csv-files, which is very useful for providing the client with information about the affected resources if there are many of them. One of my favorite use cases for neo4j is to skim over all descriptions (yes, that’s a lot of data). Skimming over the AD descriptions can reveal some interesting information, e.g. what a host is used for or what technologies are used inside the company. This is not really feasible in the GUI, since every object would need to be accessed individually. Accessing the data with neo4j (through the web or bolt interface) allows us to retrieve certain information more comfortably, like the number of results with count() or only specific attributes that can easily be written to a file, e.g. usernames for password spraying.
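Collecting all descriptions, for example, is a single query in the neo4j web interface (a simple illustration, not taken from the script):

MATCH (n) 
    WHERE n.description IS NOT NULL 
RETURN n.name, n.description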

Practical Examples

Now let’s get into the fun part of using BloodHound CE and see how we can automate some things.

Generating a list of users with specific criteria

Many tools are able to generate a list of users for a given domain, but using the BloodHound CE API or the neo4j database instead has one big advantage: being able to filter for specific criteria. This lets us filter for the most interesting users or the users which will probably yield the most success. Our script generates four user files:

  • enabledUsers.txt
  • enabledTierZeroUsers.txt
  • enabledInactiveUsers.txt
  • enabledPotentialAdminUsers.txt

enabledUsers.txt will be generated using the following query:

MATCH (u:User) 
    WHERE u.enabled = true 
RETURN u.name

This will simply filter out all disabled users. By filtering out the disabled users, we can drastically reduce the number of users we have to use during our next attack, e.g. password cracking. In a recent pentest, this reduced the number of users by over 50%.
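A minimal sketch of how such a list can be written to disk with the neo4j Python driver (assuming the bolt port is forwarded from Docker and the default credentials from the docker-compose setup; adjust both as needed):

from neo4j import GraphDatabase, RoutingControl

driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'bloodhoundcommunityedition'))
q = 'MATCH (u:User) WHERE u.enabled = true RETURN u.name'
records, _, _ = driver.execute_query(q, database_='neo4j', routing_=RoutingControl.READ)
with open('enabledUsers.txt', 'w') as f:
    for record in records:
        f.write(record['u.name'] + '\n')

enabledTierZeroUsers.txt only contains the enabled Tier Zero users.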

MATCH (u:User) 
    WHERE u.enabled = true AND u.system_tags = 'admin_tier_0'
RETURN u.name

The query is rather simple due to the new system_tags attribute. This can be useful in combination with grep -f to check whether the password of a Tier Zero user was cracked successfully. The enabledInactiveUsers.txt file is quite interesting, since it contains enabled users with no login in the last 90 days. In many cases this means a user is not being used anymore (e.g. the employee left the company), but since the user is not disabled, the account can still be used. These users are good candidates for password attacks, since in most scenarios there is a much smaller risk of locking them out. The query is a little more complex:

MATCH (u:User) 
    WHERE u.enabled = true AND 
        u.lastlogon < (datetime().epochseconds - (90 * 86400)) AND 
        u.lastlogontimestamp < (datetime().epochseconds - (90 * 86400)) 
RETURN u.name

To check if a user can be considered inactive, we check the lastlogon and lastlogontimestamp attributes. Both attributes contain a timestamp of the last login, but lastlogon is the login against the DC which was queried during data collection, while lastlogontimestamp is the timestamp replicated from all the other DCs. Both values must be lower than a set threshold, in this case 90 days before the time of running the query. This has the side effect of potentially returning different data if the query is executed again at a later time.
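The raw epoch values can also be converted to readable dates directly in the query; a possible variation (not part of the original script):

MATCH (u:User) 
    WHERE u.enabled = true AND 
        u.lastlogon < (datetime().epochseconds - (90 * 86400)) AND 
        u.lastlogontimestamp < (datetime().epochseconds - (90 * 86400)) 
RETURN u.name, datetime({epochSeconds: toInteger(u.lastlogon)}) AS lastLogon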

MATCH (u:User) 
    WHERE (u.name =~ '(?i).*adm.*' OR u.description =~ '(?i).*admin.*') AND 
        u.enabled = true 
RETURN u.name

enabledPotentialAdminUsers.txt contains all users whose name contains the substring adm, which is often used in the names of admin users, or whose description contains the word admin. This should yield potentially interesting users which are not necessarily Tier Zero but could very likely have high privileges on some systems. All the shown queries only work with neo4j directly. Implementing this with the API is possible but requires additional steps in some scenarios. Let’s take kerberoasting as an example and compare neo4j and the API.

API vs. neo4j database with kerberoasting as an example

The default query for kerberoastable users in BloodHound is:

MATCH (n:User)
    WHERE n.hasspn=true
RETURN n

This is a very simple query, but note that the returned users include disabled users and the user krbtgt. We can use the following Python code to request the same data with the API:

response = client._request('POST', '/api/v2/graphs/cypher', bytes('{"query": "MATCH (n:User) WHERE n.hasspn=true RETURN n"}', 'ascii'))

The response is JSON data containing all returned nodes with the following information:

  • label: name of the node
  • kind: type of node, e.g. User
  • objectId: object ID of the node
  • isTierZero: true or false
  • lastSeen: surprisingly, this is not the last logon of the user, it’s the date of ingestion; probably caused by the shared codebase with BloodHound Enterprise

In our current reporting style, the customer receives a csv-file containing all kerberoastable users with some additional information generated by the following query against the neo4j database:

MATCH (n:User) 
    WHERE n.hasspn=true AND 
        n.samaccountname <> 'krbtgt' 
RETURN n.name, n.objectid, n.serviceprincipalnames, n.system_tags

With the API, we could get the same information except for the serviceprincipalnames (SPNs). In order to get the SPNs with the API, we would need to request every kerberoastable user again to retrieve this information. The Python code would look something like this:

# query all kerberoastable users through the cypher API endpoint
response = client._request('POST', '/api/v2/graphs/cypher', bytes('{"query": "MATCH (n:User) WHERE n.hasspn=true RETURN n"}', 'ascii'))
data = response.json()['data']
for node in data['nodes']:
    oid = data['nodes'][node]['objectId']
    # the SPNs are only part of the full user object, so every user has to be requested individually
    responseUser = client._request('GET', f'/api/v2/users/{oid}')
    spns = responseUser.json()['data']['props']['serviceprincipalnames']

In the script used for automating this finding, the following function is used:

def checkKerberoastableUsers(driver):
    print('===+++===+++===+++===+++===')
    print('    Checking Kerberoastable Users')
    print('===+++===+++===+++===+++===')
    q = "MATCH (n:User) WHERE n.hasspn=true AND n.samaccountname <> 'krbtgt' RETURN count(n) "
    kerberoastable, _, _ = driver.execute_query(q, database_="neo4j", routing_=RoutingControl.READ)
    q2 = "MATCH (n:User) WHERE n.hasspn=true AND n.samaccountname <> 'krbtgt' AND n.system_tags='admin_tier_0' RETURN count(n) "
    kerberoastableTierZero, _, _ = driver.execute_query(q2, database_="neo4j", routing_=RoutingControl.READ)
    print(f'There is a total of {kerberoastable[0]["count(n)"]} kerberoastable Users. This includes {kerberoastableTierZero[0]["count(n)"]} Tier Zero Accounts!')
    if kerberoastable[0]["count(n)"] > 0:
        print("Generating csv-file for: Affected Resources")
        q3 = "MATCH (n:User) WHERE n.hasspn=true AND n.samaccountname <> 'krbtgt' RETURN n.name, n.objectid, n.serviceprincipalnames, n.system_tags "
        kerberoastableData, _, _ = driver.execute_query(q3, database_="neo4j", routing_=RoutingControl.READ)
        writeCsvFile('kerberoastableUsers.csv', kerberoastableData)

This function performs three queries to gather the following information:

  • number of all kerberoastable users
  • number of all Tier Zero kerberoastable users
  • name, object ID, SPNs and system tag (Tier Zero) of all kerberoastable users

If we find kerberoastable users, we also generate the csv-file for the customer. In our version, we also generate a PoC and a description for our report, which is not included here.
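The writeCsvFile helper used above is part of the published script; a minimal sketch of what it can look like, assuming the Record objects returned by the neo4j driver’s execute_query:

import csv

def writeCsvFile(filename, records):
    with open(filename, 'w', newline='') as f:
        writer = csv.writer(f)
        if records:
            writer.writerow(records[0].keys())  # column names from the RETURN clause
        for record in records:
            writer.writerow(record.values())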

Filtering targets for forced password changes

If we ask BloodHound CE how to abuse the GenericWrite edge, it will tell us three possible attacks: Targeted Kerberoast, Force Change Password and the Shadow Credentials attack. Depending on the circumstances, we may want to perform the Force Change Password attack but don’t know which users are safe to attack, since they may be active and we would disrupt the production of our client. Let’s use cypher queries to check which users are potential candidates for this attack. In the BloodHound GUI we can see all outbound object controls in the node’s entity panel, but how do we filter them, or show them if there are too many and the new safeguards prevent drawing the graph? The corresponding cypher query for the user [email protected] (filtered for outbound control over other users only) is:

MATCH p=(u:User {name: '[email protected]'})-[r1:MemberOf*0..]->(g)-[r2]->(n:User) 
    WHERE r2.isacl=true
RETURN p

[Figure: Sequential view of outbound object control]

Ok, now we can append some filters we already used in other queries to find potential targets:

  • n.enabled = true, since we can’t use disabled users for logins
  • n.lastlogon < (datetime().epochseconds - (90 * 86400)) AND n.lastlogontimestamp < (datetime().epochseconds - (90 * 86400)), since we want users which haven’t logged in for a while (here: 90 days)

Now we can combine everything and search for the best candidates for a forced password change attack.

MATCH p=(u:User {name: '[email protected]'})-[r1:MemberOf*0..]->(g)-[r2]->(n:User) 
    WHERE r2.isacl=true AND
        n.enabled = true AND
        n.lastlogon < (datetime().epochseconds - (90 * 86400)) AND
        n.lastlogontimestamp < (datetime().epochseconds - (90 * 86400))
RETURN p

Since the AD for the test environment was generated, no login data is present and the result is the same as in the picture above. In real environments, however, the query should return fewer results. Now we could look through all the returned users, identify the most interesting ones and change their passwords without worrying too much about locking a user out of their account.

Uploading data with the new API

One good use case for the new API is automatically uploading the collected data into BloodHound. The basic function in Python can look something like this:

import glob
import time

def uploadData(client, dirToJson):
    # the SharpHound output files, grouped by object type
    postfix = ['_ous.json', '_gpos.json', '_containers.json', '_computers.json', '_groups.json', '_users.json', '_domains.json']
    # create a new file upload job and remember its id
    response = client._request('POST', '/api/v2/file-upload/start')
    uploadId = response.json()['data']['id']
    for file in postfix:
        filename = glob.glob(dirToJson + '/*' + file)
        print(f'Uploading: {filename}')
        # utf-8-sig strips the BOM from the SharpHound output
        with open(filename[0], 'r', encoding='utf-8-sig') as f:
            data = f.read().encode('utf-8')
            response = client._request('POST', f'/api/v2/file-upload/{uploadId}', data)
    # signal that all files are uploaded so ingestion can start
    response = client._request('POST', f'/api/v2/file-upload/{uploadId}/end')
    print('Waiting for BloodHound to ingest the data.')
    response = client._request('GET', '/api/v2/file-upload?skip=0&limit=10&sort_by=-id')
    status = response.json()['data'][0]
    # poll the most recent upload jobs until ours is marked as complete
    while True:
        if status['id'] == uploadId and status['status_message'] == "Complete":
            break
        else:
            time.sleep(15)
            response = client._request('GET', '/api/v2/file-upload?skip=0&limit=10&sort_by=-id')
            status = response.json()['data'][0]
    print('Done! Continuing now.')

The dirToJson variable is a simple string containing the path to the json files without the trailing /, e.g. /customer/bloodhound. First, we must use the /api/v2/file-upload/start API endpoint to create a new file upload job. Then we upload our collected json files to /api/v2/file-upload/{file_upload_id}, with the content of the json files in the body of the request. The needed file_upload_id is returned in the /api/v2/file-upload/start response. After uploading all files, we have to notify BloodHound that the upload is done and the data can be ingested into the database. Then we periodically query the API endpoint /api/v2/file-upload?skip=0&limit=10&sort_by=-id and check if the status of the newly created job is Complete. After the ingestion has completed, we can start analysing the data.
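Together with a client object like the one sketched at the beginning of this post, an invocation could look like this (the token values and path are placeholders):

client = Client('localhost', 8080, token_id='<token id>', token_key='<token key>')
uploadData(client, '/customer/bloodhound')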

Shortest paths to Tier Zero from owned user

The new Tier Zero tag allows us to extend our search for attack paths even further, but since the query takes more time than e.g. Shortest paths to Domain Admins, it often runs into a timeout. With small modifications to the Shortest paths to high value/Tier Zero targets query, it is possible to run it with targeted starting points and hopefully finish before the timeout hits:

MATCH p=shortestPath((n {name: '[email protected]'})-[:Owns|GenericAll|GenericWrite|WriteOwner|WriteDacl|MemberOf|ForceChangePassword|AllExtendedRights|AddMember|HasSession|Contains|GPLink|AllowedToDelegate|TrustedBy|AllowedToAct|AdminTo|CanPSRemote|CanRDP|ExecuteDCOM|HasSIDHistory|AddSelf|DCSync|ReadLAPSPassword|ReadGMSAPassword|DumpSMSAPassword|SQLAdmin|AddAllowedToAct|WriteSPN|AddKeyCredentialLink|SyncLAPSPassword|WriteAccountRestrictions*1..]->(m))
WHERE m.system_tags = "admin_tier_0" AND n<>m
RETURN p

In this example, we set the starting point to a user with the name [email protected], but we could also choose computer or group names. If we mark users as owned in BloodHound Legacy or with additional tools like CrackMapExec, we can change the {name: '[email protected]'} to {owned: true} and search from multiple starting points at once, as shown below. This could potentially lead to a timeout but allows us to find more potential attack paths.
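The owned variant only changes the starting node of the query:

MATCH p=shortestPath((n {owned: true})-[:Owns|GenericAll|GenericWrite|WriteOwner|WriteDacl|MemberOf|ForceChangePassword|AllExtendedRights|AddMember|HasSession|Contains|GPLink|AllowedToDelegate|TrustedBy|AllowedToAct|AdminTo|CanPSRemote|CanRDP|ExecuteDCOM|HasSIDHistory|AddSelf|DCSync|ReadLAPSPassword|ReadGMSAPassword|DumpSMSAPassword|SQLAdmin|AddAllowedToAct|WriteSPN|AddKeyCredentialLink|SyncLAPSPassword|WriteAccountRestrictions*1..]->(m))
WHERE m.system_tags = "admin_tier_0" AND n<>m
RETURN p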

Automated script

The provided script covers some basic findings which we often encounter in our pentests and which are easy to automate. Currently, the following tasks are performed:

  • collecting basic information about total users, groups, etc. using the BloodHound API
  • generating the different user lists
  • checking if LAPS is enabled on all computer objects
  • checking if computers have unsupported Windows versions
  • checking for inactive users and computers
  • checking the age of the krbtgt password (a query sketch follows after this list)
  • checking the number of sensitive users (Domain Admins and Tier Zero) and if they are in the Protected Users group
  • checking if the guest account is active
  • checking for kerberoastable and AS-REP-roastable users
  • checking for active Tier Zero sessions
  • checking for Kerberos Delegation (Constrained, Unconstrained and Resource-based Constrained)
  • checking for DCSync for non Tier Zero objects
  • generating a file with all descriptions
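Most of these checks boil down to one or two cypher queries; the krbtgt password age, for example, can be checked along these lines (an illustration, the exact query in the script may differ):

MATCH (u:User) 
    WHERE u.samaccountname = 'krbtgt' 
RETURN u.name, datetime({epochSeconds: toInteger(u.pwdlastset)}) AS pwdLastSet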

All these findings primarily serve to identify missing best practices. They normally take a good amount of time during a pentest: running all the necessary tests manually and documenting them is painful. Automating this process as much as possible leaves more time during the engagement for compromising the AD or testing some other targets in greater detail. In order to run this script, there are some steps to follow:

  1. setup BloodHound CE using the provided docker-compose files and enable the neo4j port
  2. generate the API token and download the current version of SharpHound from the GUI
  3. enter your API token (and the changed neo4j password, if applicable) into the script
  4. run SharpHound from a domain joined host
  5. extract the .zip archive
  6. run the following commands
python -m venv adAnalysis
source adAnalysis/bin/activate
python -m pip install neo4j requests
python adAnalysis.py -d <pathToJsonDir>

The script will print out all the findings and write the files to the current directory. The following data will be written to the csv-files:

  • laps.csv: computer name, computer objectid
  • unsupportedOs.csv: computer name, computer objectid
  • inactiveUsers.csv: username, user objectid, is user enabled (true or false), is user admin (true or false)
  • inactiveComputers.csv: computer name, computer objectid, is computer enabled (true or false)
  • domainAdmins.csv: username, user objectid
  • tierZeroUsers.csv: username, user objectid
  • kerberoastableUsers.csv: username, user objectid, user service principal names, user system tags (Tier Zero or NULL)
  • asrepRoastableUsers.csv: username, user objectid, user system tags (Tier Zero or NULL)
  • tierZeroSessions.csv: username, user objectid, computer name, computer objectid
  • dcsync.csv: username, user objectid
  • constrainedDelegation.csv: username, user objectid, computer name, computer objectid
  • unconstrainedDelegation.csv: object name, object objectid
  • resourcebasedConstrainedDelegation.csv: object name (allowed to act), object objectid (allowed to act), object name (target object), object objectid (target object)

Conclusion

The new BloodHound CE looks very promising, and even though it’s still in early access, it has some nice improvements over the legacy version. The new API provides another way of interacting with BloodHound, which can be used to automate some tasks or to retrieve data in a text-based form to work with. The plans for future features also look very interesting, e.g. collecting and analyzing AD CS with BloodHound. While automating the basic tasks can significantly reduce the work during pentests, some manual analysis still has to be done to identify more complex weaknesses. But having a little more time during an engagement allows us to take a deeper look at other components or to play through different attack scenarios like a privilege escalation to sensitive files or other critical systems.

Cheers,

Robin Meier

Originally published by 8com: Automating parts of Active Directory pentests with BloodHound CE