xavier

joined 1 year ago
[–] xavier@infosec.pub 5 points 1 year ago (1 children)

Have you tried Open Camera or Cymera? May be easier to use another camera app.

[–] xavier@infosec.pub 6 points 1 year ago

AHHH! NATION STATE!!!

[–] xavier@infosec.pub 11 points 1 year ago (1 children)

This applies to you monogamous people, as well as us poly folks.

 

Is there a setting to default all external links to a new tab? I'm used to that behavior from infosec.exchange. I keep finding myself having to reopen infosec.pub after going down a rabbit hole.

[–] xavier@infosec.pub 12 points 1 year ago (4 children)

I'm currently going through Niven's work. I'd suggest The Mote in God's Eye by Larry Niven and Jerry Pournelle.

Project Hail Mary by Andy Weir is also an amazing book if you haven't read it yet.

One more suggestion: Rendezvous with Rama by Arthur C. Clarke

 

Callisto is an intelligent automated binary vulnerability analysis tool. Its purpose is to autonomously decompile a provided binary and iterate through the pseudo code output looking for potential security vulnerabilities in that pseudo C code. Ghidra's headless decompiler is what drives the binary decompilation and analysis portion. The pseudo code analysis is initially performed by the Semgrep SAST tool and then transferred to GPT-3.5-Turbo for validation of Semgrep's findings, as well as potential identification of additional vulnerabilities.

This tool's intended purpose is to assist with binary analysis and zero-day vulnerability discovery. The output aims to help the researcher identify potential areas of interest or vulnerable components in the binary, which can be followed up with dynamic testing for validation and exploitation. It certainly won't catch everything, but the two-stage validation, from Semgrep to GPT-3.5, aims to reduce false positives and allow a deeper analysis of the program.
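To make the flow concrete, here is a minimal sketch of a Callisto-style pipeline (not the project's actual code): it assumes the pseudo C has already been exported by a Ghidra headless post-script into a directory, runs Semgrep over it, and asks GPT-3.5-Turbo to validate each finding. The paths, rule set, and prompt wording are placeholders.

```python
# Minimal sketch of a Callisto-style pipeline (not the project's actual code).
# Assumes the pseudo C was already exported by a Ghidra headless post-script
# into pseudo_c/, and that OPENAI_API_KEY is set in the environment.
import json
import subprocess
from openai import OpenAI

client = OpenAI()

def run_semgrep(source_dir: str) -> list[dict]:
    """Run Semgrep over the exported pseudo C ("auto" rule set is a placeholder)."""
    out = subprocess.run(
        ["semgrep", "--config", "auto", "--json", source_dir],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout).get("results", [])

def validate_with_gpt(finding: dict, code: str) -> str:
    """Ask GPT-3.5-Turbo whether a Semgrep finding looks like a real bug."""
    prompt = (
        "You are reviewing decompiled pseudo C for security bugs.\n"
        f"Semgrep reported: {finding['check_id']} at line "
        f"{finding['start']['line']}.\n"
        "Is this a genuine vulnerability? Explain briefly and note any "
        "additional issues you see.\n\n" + code
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    for finding in run_semgrep("pseudo_c/"):
        with open(finding["path"]) as f:
            source = f.read()
        print(finding["check_id"], "->", validate_with_gpt(finding, source))
```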

 

A decompiler-unified plugin by Zion Basque that leverages the OpenAI API to enhance your decompilation process by offering function identification, function summarisation and vulnerability detection. The plugin currently supports IDA, Binja and Ghidra.

 

Codex Decompiler is a Ghidra plugin that utilizes OpenAI's models to improve the decompilation and reverse engineering experience. It currently has the ability to take the disassembly from Ghidra and then feed it to OpenAI's models to decompile the code. The plugin also offers several other features to perform on the decompiled code such as finding vulnerabilities using OpenAI, generating a description using OpenAI, or decompiling the Ghidra pseudocode.
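To illustrate the core idea (this is not the plugin's actual code), a minimal sketch might take a function's disassembly that has already been exported from Ghidra and ask an OpenAI model for equivalent C; the file name, model choice, and prompt wording here are assumptions.

```python
# Sketch of the "LLM as decompiler" idea (not the plugin's actual code):
# hand a function's disassembly to an OpenAI model and ask for readable C.
# Assumes the disassembly was already exported from Ghidra into func.asm.
from openai import OpenAI

client = OpenAI()

def llm_decompile(disassembly: str, model: str = "gpt-3.5-turbo") -> str:
    prompt = (
        "Decompile the following disassembly into readable, well-named C. "
        "Reply with only the C code.\n\n" + disassembly
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    with open("func.asm") as f:
        print(llm_decompile(f.read()))
```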

 

In this post, I introduce a new Ghidra script that elicits high-level explanatory comments for decompiled function code from the GPT-3 large language model. This script is called G-3PO. In the first few sections of the post, I discuss the motivation and rationale for building such a tool, in the context of existing automated tooling for software reverse engineering. I look at what many of our tools — disassemblers, decompilers, and so on — have in common, insofar as they can be thought of as automatic paraphrase or translation tools. I spend a bit of time looking at how well (or poorly) GPT-3 handles these various tasks, and then sketch out the design of this new tool.

If you want to just skip the discussion and get yourself set up with the tool, feel free to scroll down to the last section, and then work backwards from there if you like.

The Github repository for G-3PO can be found HERE.
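As a rough sketch of the shape such a Ghidra script takes (this is not the actual G-3PO source), the flow is: decompile the function under the cursor, send the pseudo C to the model, and attach the answer as a plate comment. The ask_llm helper below is a placeholder for the API call, since Ghidra's bundled Jython cannot import the openai package directly.

```python
# Rough sketch of a G-3PO-style Ghidra Python script (not the actual G-3PO
# source). Assumes the cursor is inside a function when the script runs.
from ghidra.app.decompiler import DecompInterface
from ghidra.program.model.listing import CodeUnit

def ask_llm(prompt):
    # Placeholder: G-3PO itself talks to the OpenAI API over HTTP.
    # Substitute your own client call here.
    return "TODO: model-generated summary goes here"

func = getFunctionContaining(currentAddress)
decomp = DecompInterface()
decomp.openProgram(currentProgram)
result = decomp.decompileFunction(func, 60, monitor)
pseudo_c = result.getDecompiledFunction().getC()

comment = ask_llm("Explain, at a high level, what this C function does:\n" + pseudo_c)

# Attach the explanation as a plate comment above the function.
code_unit = currentProgram.getListing().getCodeUnitAt(func.getEntryPoint())
code_unit.setComment(CodeUnit.PLATE_COMMENT, "LLM summary: " + comment)
print("Added plate comment to %s" % func.getName())
```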

 

LLDB plugin which queries OpenAI's davinci-003 language model to speed up reverse-engineering. Treat it like an extension of Lisa.py, an Exploit Dev Swiss Army Knife.

At the moment, it can ask davinci-003 to explain what the current disassembly does.
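Not the plugin's actual code, but a rough sketch of how an LLDB command like this can be wired up: register a Python command, capture the current frame's disassembly through the command interpreter, and send it to the completions endpoint. The explain_asm module name and prompt wording are assumptions, and text-davinci-003 has since been deprecated by OpenAI.

```python
# Sketch of an "explain this disassembly" LLDB command (not the plugin's
# actual code). Load with: command script import explain_asm.py
# then run `explain` at a breakpoint. Assumes OPENAI_API_KEY is set and the
# openai package is importable from LLDB's Python.
import lldb
from openai import OpenAI

client = OpenAI()

def explain(debugger, command, exe_ctx, result, internal_dict):
    """Send the current frame's disassembly to the model and print the answer."""
    ret = lldb.SBCommandReturnObject()
    debugger.GetCommandInterpreter().HandleCommand("disassemble --frame", ret)
    disasm = ret.GetOutput()
    resp = client.completions.create(
        model="text-davinci-003",  # the model the plugin targets; now deprecated
        prompt="Explain what this disassembly does:\n" + disasm,
        max_tokens=512,
    )
    result.AppendMessage(resp.choices[0].text)

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand("command script add -f explain_asm.explain explain")
```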

 

IDAPython script by Daniel Mayer that uses the unofficial ChatGPT API to generate a plain-text description of a targeted routine. The script then leverages ChatGPT again to obtain suggestions for variable and function names.
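As a hedged illustration of the same two-step pattern (not the actual script, which used the unofficial ChatGPT API; this sketch goes through the official OpenAI client instead): decompile the routine under the cursor, ask for a description, then ask for naming suggestions.

```python
# Sketch of the two-step pattern (not the original script). Run inside IDA
# with the Hex-Rays decompiler available and the openai package installed
# in IDA's Python environment.
import ida_hexrays
import idaapi
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

func = idaapi.get_func(idaapi.get_screen_ea())
pseudocode = str(ida_hexrays.decompile(func.start_ea))

# Step 1: plain-text description of the routine under the cursor.
print(ask("Describe what this decompiled routine does:\n" + pseudocode))

# Step 2: naming suggestions; printed rather than applied automatically here.
print(ask("Suggest clearer function and variable names for this code, as a "
          "JSON object mapping old names to new names:\n" + pseudocode))
```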

 

Gepetto is a Python script which uses OpenAI's gpt-3.5-turbo and gpt-4 models to provide meaning to functions decompiled by IDA Pro. At the moment, it can ask gpt-3.5-turbo to explain what a function does, and to automatically rename its variables.

 

This is a little toy prototype of a tool that attempts to summarize a whole binary using GPT-3 (specifically the text-davinci-003 model), based on decompiled code provided by Ghidra. However, today's language models can only fit a small amount of text into their context window at once (4096 tokens for text-davinci-003, a couple hundred lines of code at most) -- most programs (and even some functions) are too big to fit all at once.

GPT-WPRE attempts to work around this by recursively creating natural language summaries of a function's dependencies and then providing those as context for the function itself. It's pretty neat when it works! I have tested it on exactly one program, so YMMV.
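A toy sketch of that recursive strategy might look like the following (not GPT-WPRE's actual code): it assumes the decompiled functions and their call graph have already been exported from Ghidra into two dictionaries, uses a chat model instead of text-davinci-003, and ignores the call-graph cycle handling and context-length trimming that real code would need.

```python
# Toy sketch of recursive bottom-up summarisation (not GPT-WPRE's code).
# decompiled: name -> pseudo C; callees: name -> list of called function names.
from functools import lru_cache
from openai import OpenAI

client = OpenAI()

decompiled = {}   # e.g. {"main": "...pseudo C...", "parse": "..."}
callees = {}      # e.g. {"main": ["parse"], "parse": []}

@lru_cache(maxsize=None)
def summarize(name: str) -> str:
    """Summarize callees first, then use those summaries as context."""
    # Real code would also break call-graph cycles and trim long contexts.
    context = "\n".join(
        "%s: %s" % (c, summarize(c)) for c in callees.get(name, []) if c != name
    )
    prompt = (
        "Summaries of functions called by this one:\n" + (context or "(none)") +
        "\n\nBriefly summarize what this function does:\n" + decompiled[name]
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```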

 

Automated Audit Log Forensic Analysis (ALFA) for Google Workspace is a tool to acquire all Google Workspace audit logs and perform automated forensic analysis on them using statistics and the MITRE ATT&CK Cloud Framework.

By Greg Charitonos and BertJanCyber
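For the acquisition step alone, a minimal sketch using the Admin SDK Reports API could look like this (not ALFA's code; the statistics and ATT&CK mapping are omitted, and the service-account file and admin address are placeholders):

```python
# Sketch of pulling Google Workspace audit events via the Admin SDK Reports
# API (acquisition only; not ALFA's actual code). Requires a delegated admin
# credential with the reports.audit.readonly scope.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]

creds = service_account.Credentials.from_service_account_file(
    "service_account.json", scopes=SCOPES
).with_subject("admin@example.com")  # hypothetical admin account to impersonate

service = build("admin", "reports_v1", credentials=creds)

# "login" is one of several application names (drive, admin, token, ...).
request = service.activities().list(userKey="all", applicationName="login")
while request is not None:
    response = request.execute()
    for event in response.get("items", []):
        print(event["id"]["time"], event["actor"].get("email"),
              event["events"][0]["name"])
    request = service.activities().list_next(request, response)
```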

 

cross-posted from: https://infosec.pub/post/86834

This is an excellent series on container security fundamentals by Rory McCune, who is a bit of an authority in this field.

 

Hey everyone! Since we're creating a new community here, I'd love to hear who's here.

I've been doing security for a bit over 30 years now. Made it up to a divisional CISO, then climbed back down the ladder to find a good work/life balance. Currently part of the security leadership team at a large US bank. I run a couple of teams right now, including a firewall policy engineering team and a production support center of excellence. I'm looking forward to seeing what type of community we can build here.
