This is my personal, unofficial convenience mirror of MDXfind, the CPU-based hash-cracking tool.
Official site: https://hashes.org/mdxfind.php (offline)
(mirror last refreshed/checked: 2024-01-22; MDXfind version 1.120, dated 2024-01-22)
"MDXfind is a program which allows you to run large numbers of unsolved hashes, using many algorithms, against large numbers of plaintext words, very quickly."
- Waffle
MDXfind features (my perspective - not an official list):
- Multi-platform: AIX, ARMv6, ARMv7, ARMv8, FreeBSD 8.1+, Linux (32/64), macOS/OS X (x86, M1), Power8, Windows (32/64)
- Multi-algorithm: Can try 357 different core algorithm combinations/variants as observed in the wild - in parallel in a single job, using Judy arrays
- Multi-iteration: can try thousands of iteration counts of any of these core algorithms - also in a single job (effectively millions of end-result algorithms)
- Efficient handling of very large hashlists (100M+) and large wordlists
- Can handle plaintexts of lengths up to 10,000 characters
- Directory recursion for wordlists
- Can take input from stdin
- Can process lists of hashes with mixed algorithm types (output indicates the algorithm; use mdsplit to separate the results into per-algorithm lists - see the sketch after this list)
- Supports simple regex for including and excluding hash types by name
- Ability to skip X words from beginning of a wordlist (can be used for simple distribution of work)
- Support for rotated and truncated hashes
- Real-world transformation automation: email address munging, Unicode expansion, HTML escapes
- Read salts, usernames, suffixes, and/or rules from external files
- Configurable CPU thread count
- Apply multiple rules files (either in series or as dot-product)
- Ability to generate any supported hashes and iteration counts (using -z)
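A minimal sketch of the mixed-hashlist workflow (the filenames are placeholders, and the exact mdsplit calling convention shown is an assumption - see usage.txt in this mirror):
mdxfind -h ALL -h '!salt,!user' -f mixed.txt wordlist.txt | mdsplit mixed.txt
# mdxfind tries every non-salted algorithm against wordlist.txt; mdsplit
# (assumed here to take the original hashlist as its argument) then separates
# the founds into one file per algorithm and drops the solved hashes from the
# remaining list.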
When to use it
- When you have a mix of hash types
- When you're not sure what type of hash you have
- When you have many words to try on many hashes
- On GPU-unfriendly algorithms
- To quickly cull common plains from a very large hashlist
- To quickly process many previous hashlists - with new candidate plaintexts, when new algorithms appear, with new rules, etc.
When not to use it - so far ;)
- For mask and brute-force attacks on GPU-friendly algorithms
- For distributed hash clusters (Hashtopolis, etc. do not support it ... yet)
Distribution metadata (generated by me, not part of the distribution): see the files marked '(not distro)' in the listing below.
Help and examples
Introductory video from @winxp5421
Basic example (adapted from @winxp5421):
mdxfind -h ALL -h '!salt,!user,!md5x,!crypt' -i 5 -f unknown.txt /dict/top-10k-pass | tee -a out.res
-h ALL # search every algorithm
-h '!salt,!user,!md5x,!crypt' # ...but exclude explicitly salted (!salt), username-based (!user), MD5x (!md5x), and some implicitly salted (!crypt) algorithms
# !md5x excludes 'internally iterated' hashes such as sha1(md5(md5(md5("hello"))))
-i 5 # iterate from 1 to 5 times, inclusive
-f unknown.txt # target hashlist
/dict/top-10k-pass # wordlist (input dictionary)
| tee -a out.res # append to outfile (where the results go).
(Using '| tee -a out.res' lets you watch the output as it runs; use '> out.res' instead if you don't want to watch, or if tee isn't available on your system.)
To change the default behavior of using all available cores, use '-t N', where N is the number of threads to use.
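For example, to limit the job above to four threads (a sketch reusing the same hash selection and paths):
mdxfind -t 4 -h ALL -h '!salt,!user,!md5x,!crypt' -i 5 -f unknown.txt /dict/top-10k-pass | tee -a out.res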
Threads with more info and examples:
Moved/broken references, necromancy TBD:
- Google search for MDXfind on Hashkiller forums
- usage from hashes.org forums
- thread 1
- thread 2
- thread 3
Examples of how to use mdxfind's -z flag to generate arbitrary hashes:
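In the meantime, a minimal sketch (the MD5x selector and the output format are assumptions - check algorithms.txt and usage.txt in this mirror):
mdxfind -z -h MD5x -i 2 words.txt > generated.txt
# Instead of cracking, -z prints the computed hashes (here, MD5 iterated once
# and twice) for every word in words.txt.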
Info on output and tuning, from Waffle:
- "line" is the line number in the current file. It's not random - it reads a "memory chunk" of data from the file, then breaks that into lines (so there will be a lot of lines if each line is a very short password, and there will be fewer lines if each line is a long password). It increases as block of lines is run through all of the algorithms specified (might be just one algorithm, or you might have access them all).
- Take the line number you see, and back up a few thousand (just for good measure - there's no way to know exactly which line it was on, since the comfort messages update about every 15 seconds and mdxfind can process millions of lines per second), and use that as the -w value for the next run (see the sketch after this list).
- The w= argument shows how "busy" mdxfind is. If you have a lot of cores, this number can be very high. It is the number of work units waiting in the queue to be processed, where each work unit is an algorithm plus a block of lines. If you are using a simple algorithm like MD5, MDXfind will usually out-pace the speed of your disk, and w= will be a very low number. With a "hard" algorithm like MD5DSALT or BCRYPT, this number will be high, meaning that your hard drive can produce data faster than the algorithms can consume it.
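For example, if the last comfort message showed line 2,347,000, a restart of the earlier job might look like this (a sketch; the numbers and paths are placeholders):
mdxfind -h ALL -h '!salt,!user,!md5x,!crypt' -i 5 -f unknown.txt -w 2340000 /dict/top-10k-pass | tee -a out.res
# -w 2340000 skips the first 2,340,000 words of the wordlist, backing up a few
# thousand lines from where the previous run left off.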
Usage output
See usage.txt
Kudos
All credit to Waffle (and hops) for this powerful hash-cracking tool. A serious amount of careful thought and insight has gone into making it reliable and efficient.
See also
- rling - removes duplicates (and optionally sorts), very quickly, with a variety of memory/speed/storage trade-off options. Also from Waffle and team. See this thread for theory and examples; a basic sketch follows.
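A basic sketch (the 'input output' argument order is an assumption - check rling's own help output):
rling raw-wordlist.txt deduped-wordlist.txt
# reads raw-wordlist.txt, removes duplicate lines, and writes the result to deduped-wordlist.txt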
Name Last modified Size Description
archive/ 2024-01-24 02:26 -
file-output.txt 2024-01-24 21:23 4.2K file(1) of all files - binary/executable details (not distro)
CHECKSUMS.MD5.txt 2024-01-24 21:23 1.3K MD5 checksums (not distro)
CHECKSUMS.SHA256.txt 2024-01-24 21:23 2.2K SHA256 checksums (not distro)
usage.txt 2024-01-24 21:23 2.4K usage output for mdxfind and mdsplit (not distro)
algorithms.txt 2024-01-24 21:23 7.1K list of supported algorithms, one per line (not distro)
mdxfind-latest.zip 2024-01-24 21:21 15M symlink to latest zip file (hopefully)
mdxfind-1.120.zip 2024-01-24 21:21 15M
CHANGES.txt 2024-01-24 02:31 619 Change log (not distro)
mdxfind.aix 2024-01-23 12:24 2.9M
mdxfind.freebsd 2024-01-23 01:30 2.7M
mdxfind.static 2024-01-23 01:20 3.4M Linux static binary
mdxfind 2024-01-23 01:20 2.1M
mdsplit.arm8 2024-01-23 01:14 123K
mdxfind.arm8 2024-01-23 01:09 2.1M
mdsplit.power8 2024-01-23 00:54 259K
mdxfind.power8 2024-01-23 00:53 2.7M
mdxfind.arm6 2024-01-23 00:36 2.5M
mdxfind.arm7 2024-01-23 00:29 2.5M
mdxfind.exe 2024-01-23 00:20 2.7M
mdxfind-32.exe 2024-01-23 00:20 2.8M
mdsplit.arm6 2024-01-23 00:20 74K
mdxfind-32 2024-01-23 00:18 2.8M
mdxfind.macm1 2024-01-23 00:11 2.0M
mdxfind.mac 2024-01-23 00:09 2.7M
mdsplit.mac 2024-01-22 17:34 161K
mdsplit.macm1 2024-01-22 16:07 166K
mdsplit-32.exe 2020-09-22 08:17 86K
mdsplit.exe 2020-09-22 08:17 163K
mdsplit.aix 2020-06-24 18:38 294K
mdsplit.txt 2019-09-01 22:33 2.5K old Perl version of mdsplit - for reference
mdsplit-32 2017-11-02 19:13 82K
mdsplit.static 2017-11-02 19:12 954K Linux static binary
mdsplit 2017-11-02 19:12 27K
mdsplit.arm7 2017-11-02 19:10 70K
mdsplit.freebsd 2017-09-25 13:36 152K