[antlr-interest] multi-core usage and pipeline processing

Ian Kaplan iank at bearcave.com
Fri Jun 13 14:33:03 PDT 2008


On Fri, Jun 13, 2008 at 2:20 PM, Loring Craymer <lgcraymer at yahoo.com> wrote:

> It should be pointed out that the reason for processing multiple files in
> parallel is the overhead of opening and closing files.  Without this
> overhead, the various threads would saturate both disk and memory bandwidth
> (optimizing disk access leads to large buffers, and competition for memory
> access results).  When dealing with large files (single file system),
> though, sequential processing is preferred because disk I/O is the limiting
> resource and competing file accesses degrade performance due to
> frequent long-distance disk seeks.
>
> --Loring
>

 I also thought that disk I/O would be the limiting factor, but I'm not
sure that is as true as it used to be.  Network attached storage and
high-speed RAID systems now support very high-performance disk access,
and flash-based disks are starting to emerge.  These factors are starting
to change the way database systems are designed.

  Again, I think this is largely a moot issue: with parallel make, compiles
are already fast enough for most people, so throwing parallel technology at
the parser itself doesn't seem warranted.  But it's still interesting to
think about how one would do it if it were justified.
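
  As a rough sketch only (assuming Java on a modern JDK; the parse() method
here is a hypothetical stand-in for an ANTLR-generated lexer/parser), one
could farm each small file out to a fixed-size thread pool so the open/close
overhead of one file overlaps with the CPU-bound parsing of another:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelFileDriver {

    // Hypothetical per-file work: read the file and hand it to a parser.
    static void parse(Path file) throws IOException {
        String text = Files.readString(file);
        // ... feed 'text' to the real lexer/parser pipeline here ...
        System.out.println(file + ": " + text.length() + " chars");
    }

    public static void main(String[] args) throws InterruptedException {
        // One worker per core; more threads than cores just adds contention.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        for (String name : args) {
            Path file = Paths.get(name);
            pool.submit(() -> {
                try {
                    parse(file);
                } catch (IOException e) {
                    System.err.println(file + ": " + e.getMessage());
                }
            });
        }

        pool.shutdown();                          // no new tasks accepted
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for the backlog to drain
    }
}

This only pays off for many small files; for one large file on a single
spindle it degenerates to the sequential case Loring describes, where
competing seeks would only make things worse.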

  Ian