Specifically, LOAD DATA INFILE in MySQL can yield upwards of 100k inserts/sec because it takes a full table write lock and buffers the writes at the data-file level, not the statement level. It's closer to batch-writing sectors of disk than inserting rows of data.
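As a rough illustration, one bulk-load statement replaces thousands of individual INSERTs. A minimal sketch of composing such a statement (the table, column, and file names here are made-up examples, not anything from our setup):

```python
# Sketch: build a single LOAD DATA INFILE statement for a tab-delimited file,
# instead of issuing one INSERT per row. Names are hypothetical.
def load_data_stmt(path, table, columns):
    """Compose a LOAD DATA INFILE statement for a TSV file."""
    cols = ", ".join(columns)
    return (
        f"LOAD DATA INFILE '{path}' INTO TABLE {table} "
        f"FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n' ({cols})"
    )

print(load_data_stmt("/tmp/events.tsv", "events", ["ts", "host", "msg"]))
```

The server then parses and writes the whole file in one pass, which is where the batch-at-the-data-level speedup comes from.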
... mongoimport behaves similarly, though the difference isn't quite as pronounced as it is with mysqlimport on a MyISAM table.
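For comparison, the MongoDB-side bulk load is just a mongoimport invocation. A small sketch that assembles the argv (database, collection, and file names are hypothetical; `--db`, `--collection`, and `--file` are standard mongoimport flags):

```python
# Sketch: compose a mongoimport command line for a newline-delimited JSON file.
# The db/collection/path values are made-up examples.
def mongoimport_cmd(db, collection, path):
    """Build the argv list for a bulk mongoimport run."""
    return [
        "mongoimport",
        "--db", db,
        "--collection", collection,
        "--file", path,
    ]

print(" ".join(mongoimport_cmd("logs", "events", "/tmp/events.json")))
```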
We should also point out that grabbing these kinds of locks and doing these kinds of manipulations should be part of careful planning: the table can be inaccessible through normal means (i.e., queries) for fairly long periods, and because indexing is turned off during some of these operations, some potentially time-intensive index rebuilding may be required afterward. (I'm not sure what percentage of this applies to MongoDB, since it's a bit unique.) Perhaps it would be good if we could work together (several of us have been experimenting with optimal buffering, database, and index setups) to figure out best practices for initial storage, indexing, retention, archiving, etc.

Matthew.
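To make the lock/rebuild caveat concrete, the MyISAM-style sequence usually looks like the steps below: take the write lock, disable index maintenance, bulk-load, then rebuild indexes in one pass. A sketch (MyISAM-specific ALTER TABLE ... DISABLE/ENABLE KEYS; table and file names are made up):

```python
# Sketch: the lock / disable-keys / bulk-load / rebuild sequence described above.
# While this runs, normal queries against the table will block.
def bulk_load_plan(table, path):
    """Return the SQL steps for a bulk load with deferred index rebuild."""
    return [
        f"LOCK TABLES {table} WRITE",
        f"ALTER TABLE {table} DISABLE KEYS",           # skip per-row index updates
        f"LOAD DATA INFILE '{path}' INTO TABLE {table}",
        f"ALTER TABLE {table} ENABLE KEYS",            # rebuild indexes in bulk
        "UNLOCK TABLES",
    ]

for step in bulk_load_plan("events", "/tmp/events.tsv"):
    print(step)
```

The ENABLE KEYS step is where the potentially long index rebuild happens, which is why the whole window needs to be planned.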