> I'm sorry, but that's just bollocks. Such a crawler would not do anything other than what Google does. And a server most certainly can handle a single crawler downloading a bunch of pages.
> And his computer can easily handle the content of the FTB wiki. It's nothing special. Heck, for a project for my previous employer we downloaded the entire Wikipedia and loaded it onto a single desktop system to showcase some advanced search technology. No issue at all.
As for running the server on your computer, yes, it can work, assuming you have the same wiki engine and all, and it doesn't take that many resources.
But the fact is that it requires a database and a PHP server on your computer, and those do take some power away from your machine.
I have all of that on my computer, for work purposes, and no, it is not that taxing. But on computers that only have one CPU core (yes, they still exist, sadly), or on computers that already don't run Minecraft all that well, running additional programs that eat hard drive read time and CPU can indeed hurt the game experience.
I am not omniscient, so I don't know how much everyone knows about running PHP/MySQL or about computers in general.
What I said is based on the fact that it does take processing power, and some people might not have much left to spare.
As for the crawlers, yes, Google does it.
But Google only crawls every so often, and it has processes that optimize the work, such as skipping pages that haven't changed since the last visit.
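To give an idea of what that optimization looks like, here is a minimal sketch in Python using a conditional HTTP request; the URL is made up for the example, and whether the wiki's server actually honours If-Modified-Since depends on its caching setup:

```python
import urllib.request
import urllib.error

def fetch_if_changed(url, last_modified=None):
    """Download a page only if it changed since our last visit.

    last_modified is the HTTP date string the server sent last time.
    Returns (body, last_modified); body is None if the page is unchanged.
    """
    request = urllib.request.Request(url)
    if last_modified:
        # Ask the server to reply "304 Not Modified" instead of sending
        # the full page again if nothing changed since that date.
        request.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(request) as response:
            return response.read(), response.headers.get("Last-Modified")
    except urllib.error.HTTPError as error:
        if error.code == 304:
            # Unchanged: the server sent no body, costing it almost nothing.
            return None, last_modified
        raise

# Hypothetical page, purely for illustration.
body, stamp = fetch_if_changed("http://ftbwiki.example/wiki/Main_Page")
```

A naive crawler that just re-downloads everything skips all of that, so every full crawl hits the server with the full cost of every page.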
Let's say that 10 people wanted to use crawlers.
That would mean 10 times the load Google generates, and each of those crawls would only benefit a single person.
These people would probably re-crawl from time to time to keep their copies up to date, once again taking server time for their own purposes.
As you know, the servers the wiki/forums/site run on are not all that powerful, and they already have trouble from time to time when there is too much load on them.
That means the people taking large amounts of CPU and bandwidth for their own purposes will hurt the other users.
I call these people selfish, nothing more.
When the wikis are ready, if the maintainers choose to provide a way to get an offline version, it will most probably be through a download of an SQL dump plus a link to the wiki engine used.
SQL dumps are most likely being made anyway, for backups, so it is usually a good way to go.
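For reference, "putting a dump in a MySQL" boils down to something like this sketch; the file and database names are invented for the example, and it assumes a local MySQL server and the mysql command-line client are already installed:

```python
import subprocess

# Invented names, purely for illustration.
DUMP_FILE = "ftbwiki-dump.sql"
DATABASE = "ftbwiki"

# Create an empty database for the wiki (prompts for the MySQL password).
subprocess.run(
    ["mysql", "-u", "root", "-p",
     "-e", f"CREATE DATABASE IF NOT EXISTS {DATABASE}"],
    check=True,
)

# Replay the dump into it; the dump file is just a long list of SQL
# statements recreating the wiki's tables and rows.
with open(DUMP_FILE, "rb") as dump:
    subprocess.run(["mysql", "-u", "root", "-p", DATABASE],
                   stdin=dump, check=True)
```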
But that only applies if they want people to have an offline version for themselves at all.
While I do agree that the OP's case is special and that an offline mode could really help him, I don't believe that most of the people who would use it would actually be offline.
Plus, that creates another problem: people might complain about false/outdated information when their database is days or months behind, simply because they didn't grab the newest dump before complaining.
Let's not forget that loading a dump into MySQL (as sketched above) is easy for us, but I know for a fact that it is still too much work for some of the people who don't understand how any of this works (and don't want to understand, they just want it to work).
I apologize for the long posts, but I feel I need to explain my reasoning when I see people focusing on a specific point that I think is inaccurate or irrelevant to the situation.
You are of course totally free to disagree with, or ignore, this entire post if you so desire; it may well contain errors and imperfections.