[RAS] memory problem

Christian Zimmermann christian.zimmermann at uconn.edu
Thu Mar 25 20:56:48 CDT 2010


Thanks. I even managed to add the paper to her profile.

Christian Zimmermann                                     FIGUGEGL!
Department of Economics
University of Connecticut
341 Mansfield Road, Unit 1063
Storrs, CT 06269-1063
http://ideas.repec.org/zimm/   christian.zimmermann at uconn.edu
http://ideas.repec.org/e/pzi1.html

On Thu, 25 Mar 2010, Thomas Krichel wrote:

>  Christian Zimmermann writes
>
>> I have repeatedly over the last weeks tried to updareq
>> RePEc/tor/tecipa, because an author cannot claim her (only) paper.
>> From log files I see:
>>
>> RECORD PROCESS: repec:tor:tecipa:tecipa-396|ReDIF-Paper 1.0
>> RECORD NEW: repec:tor:tecipa:tecipa-396
>> U DATAFILE_FINISH: tor/tecipa/out1.rdf
>> save record problem: Inappropriate ioctl for device / Logging region out of memory; you may need to increase its size
>> save record problem: Inappropriate ioctl for device / Logging region out of memory; you may need to increase its size at /home/aras/acis/lib/RePEc/Index/Update.pm line 899, <FILE> line 7084.
>> processed 'tor/tecipa/', found: out1.rdf
>>
>
>  It's a problem with rid's Berkeley DB. It's a known issue that has
>  been plaguing us for a while. Let me try to fix it.
>
>  Here is apparently how we find out about the log region sizes:
>
> aras@nebka:~/acis/RI/data$ db4.6_stat -l -h .
> 0x40988 Log magic number
> 13      Log version number
> 31KB 256B       Log record cache size
> 0       Log file mode
> 10Mb    Current log file size
> 1409130 Records entered into the log
> 347MB 192KB 618B        Log bytes written
> 0       Log bytes written since last checkpoint
> 666610  Total log file I/O writes
> 2099    Total log file I/O writes due to overflow
> 664731  Total log file flushes
> 243     Total log file I/O reads
> 475089  Current log file number
> 10465668        Current log file offset
> 475089  On-disk log file number
> 10465668        On-disk log file offset
> 4       Maximum commits in a log flush
> 1       Minimum commits in a log flush
> 96KB    Log region size
> 12      The number of region locks that required waiting (0%)
>
>  According to
>
> http://www.cjc.org/blog/archives/2006/08/22/cyrus-imap-log-and-cache-settings/
>
>  I created the file
>
> aras@nebka:~/acis/RI/data$ cat DB_CONFIG
> set_cachesize 0 2097152 1
> set_lg_regionmax 1048576
>
>  and ran
>
> aras@nebka:~/acis/RI/data$ db4.6_recover -h .
>
> aras@nebka:~/acis/RI/data$ db4.6_stat -l -h .
> 0x40988 Log magic number
> 13      Log version number
> 31KB 256B       Log record cache size
> 0       Log file mode
> 10Mb    Current log file size
> 0       Records entered into the log
> 0       Log bytes written
> 0       Log bytes written since last checkpoint
> 0       Total log file I/O writes
> 0       Total log file I/O writes due to overflow
> 0       Total log file flushes
> 707     Total log file I/O reads
> 475089  Current log file number
> 10465816        Current log file offset
> 475089  On-disk log file number
> 10465816        On-disk log file offset
> 0       Maximum commits in a log flush
> 0       Minimum commits in a log flush
> 1MB 32KB        Log region size
> 0       The number of region locks that required waiting (0%)
>
>  The stats now show that the log region size is larger (96KB before, 1MB 32KB after).
>
>  I try
>
> aras@nebka:~/acis/RI$ updareq RePEc /tor/tecipa
>
>  and the log shows
>
> Thu Mar 25 21:29:16 2010  request:
> source: /home/aras/acis/bin/updareq [32355]
> collection: RePEc
> update: /tor/tecipa ()
> processed 'tor/tecipa/', found: out1.rdf
> Thu Mar 25 21:29:17 2010 processed /tor/tecipa in RePEc
> Thu Mar 25 21:29:17 2010 time:  1 wallclock secs ( 0.26 usr +  0.03 sys =  0.29 CPU)
>
>
>> Can someone explain what is wrong?
>
>  You are not a computing genius. Tonight it's my night.
>
>
>
>  Cheers,
>
>  Thomas Krichel                    http://openlib.org/home/krichel
>                                http://authorclaim.org/profile/pkr1
>                                               skype: thomaskrichel
>
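[Editor's note: the recovery procedure described in the message above can be sketched as a small shell script. The db4.6_* tool names and the DB_CONFIG values are taken from the thread; the temporary directory is a stand-in for ~/acis/RI/data, used here only for illustration.]

```shell
# Sketch of the Berkeley DB log-region fix, assuming db4.6 utilities.
# DBDIR stands in for the real environment directory (~/acis/RI/data).
DBDIR="$(mktemp -d)"

# DB_CONFIG is read by Berkeley DB whenever the environment is opened.
# set_cachesize takes gbytes, bytes, ncache; set_lg_regionmax is in bytes.
cat > "$DBDIR/DB_CONFIG" <<'EOF'
set_cachesize 0 2097152 1
set_lg_regionmax 1048576
EOF

# Run recovery so the environment is re-created with the new region size,
# then check the stats. Skipped if the db4.6 tools are not installed.
if command -v db4.6_recover >/dev/null 2>&1; then
    db4.6_recover -h "$DBDIR"
    db4.6_stat -l -h "$DBDIR" | grep 'Log region size'
fi
```

Note that DB_CONFIG settings only take effect when the environment is re-opened, which is why the recovery step follows the file creation.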



More information about the RAS-run mailing list