Date:         Mon, 24 Aug 1998 17:18:16 -0700
Reply-To:     "Self, Karsten" <kself@VISA.COM>
Sender:       "SAS(r) Discussion" <SAS-L@UGA.CC.UGA.EDU>
From:         "Self, Karsten" <kself@VISA.COM>
Subject:      Re: FTP > 2GB ?? (was RE: Unix -- large files, pipes, and networks)
Content-Type: text/plain; charset="iso-8859-1"
Responding to Jack Shoemaker's post: the local host is Solaris 5.6 (64
bit), the remote is 5.5.1 (32 bit), which means the local host is large
file capable but the remote is not. This might have something to do with
it. I haven't tried a loopback FTP (FTP to the local machine) on the
Solaris 5.6 box, but will see if this fixes anything. If it does, then I
suspect the ftp daemon on the remote (32 bit) host.
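For reference, the loopback test I have in mind would run along these
lines -- treat this as a sketch; BIGFILE stands in for any file over 2
GB and the FIFO path is arbitrary:

    # on the same 5.6 box: create a FIFO and drain it to /dev/null
    mkfifo /tmp/LOOPFIFO
    cat /tmp/LOOPFIFO > /dev/null &
    # then transfer BIGFILE back through the local ftp daemon
    echo "
    user <userid> <password>
    put BIGFILE /tmp/LOOPFIFO
    " | ftp -niv localhost

If that works, the 64-bit side is fine and the 32-bit remote daemon is
the likely culprit.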
I've been combing the ftp and ftpd man pages for any hints on large file
capabilities. I'd also like to post this to a Solaris newsgroup for
feedback, but can't from work.
I've confirmed the problem by attempting to FTP two files totaling more
than 2 GB via standard input. The destination file, FIFO, is a named
pipe which is being read directly, in this case by "cat FIFO >
/dev/null", so that no data are actually written to disk on the remote
host.
echo "
user <userid> <password>
put - FIFO
`cat BIG1 BIG2`
" | ftp -niv <remotehost>
Files BIG1 and BIG2 are each just under 2GB. This fails with the same
message as before. Again, watching and logging filesystem utilization
of /tmp and /var shows no appreciable use on either the local or remote
host. A packet limitation doesn't quite make sense, as packets carry
some number of bytes n >= 2 (I don't know what the standard networking
packet size is), so a limit on packet count would show up at a
different byte total. This seems to be a strict byte count limit.
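For what it's worth, the failure point is consistent with a signed
32-bit byte counter somewhere in the chain, which tops out one byte
short of 2 GB:

    $ echo '2^31 - 1' | bc
    2147483647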
Karsten M. Self (email@example.com)
What part of "Gestalt" don't you understand?
> From: Frank Schiffel[SMTP:SchifF@mail.health.state.mo.us]
> Sent: Friday, August 21, 1998 6:45 AM
> To: kself@VISA.COM
> Subject: Re: FTP > 2GB ?? (was RE: Unix -- large files, pipes,
> and networks)
> This isn't a solution, but might be a thread that solves the problem
> somehow. When I worked in DoD, we needed to have 1 Gb/sec throughput
> for realtime analysis of telemetry data. It seems that both the Mac
> and IBM PC software have (or at least had) a 2 GB limit on
> transmitting and receiving data. This was some sort of 'packet size'
> type limitation. You might have to talk to the people who are running
> the system you're working SAS on to see what they have. Seems that the
> airline reservations people had to work around this limitation.
> Frank Schiffel
> Research Analyst III
> Bureau of Health Resource Statistics
> State Center for Health Statistics
> 920 Wildwood Drive
> PO Box 570
> Jefferson City Missouri 65102-0570
> (573) 751-6279
> FAX (573) 526-4102
> visit our website at: http://www.health.state.mo.us/
> >>> "Self, Karsten" <kself@VISA.COM> 8/20/98 5:29:11 PM >>>
> I posted a very clever solution to a problem involving data access
> across a network yesterday. There's only one small niggling little
> issue with the method proposed:
> It doesn't work.
> It *does* work with a subset of data (tested up to 80k obs). However,
> when I launched the production version of the job (reading all 40 GB
> of data), SAS stopped processing on an error after about 4 minutes and
> the shell script generated a "no space left on device" error.
> Subsequent tests watching filesystem utilization on both the sending
> and receiving hosts showed no appreciable disk usage in /tmp, /var, or
> /var/spool (the typical buffer storage directories).
> I suspect there is an issue involving FTP of large files (> 2GB), in
> which FTP loses track of data transmitted (which it tracks for
> accounting and CRC check purposes). The issue appears to have occurred
> again in transmitting a 4GB file via FTP (no SAS or other complicating
> factors involved), in which only 2GB were actually transmitted.
> Has anyone experienced/confirmed this problem? Do you have a
> workaround, explanation, or a fixed version of FTP which will work on
> Solaris 5.5.1?