Date:         Sat, 28 Jun 2003 11:05:35 -0700
Reply-To:     Annie Chang <chang5a@YAHOO.COM>
Sender:       "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From:         Annie Chang <chang5a@YAHOO.COM>
Subject:      Why does performance vary so much?
Content-Type: text/plain; charset=us-ascii
Thanks very much for all the good advice on speeding up
my 10-million-record sort. I tried a few alternatives
and found that the most reliable way to reduce the
time, in my case, is to split the large dataset into a
few smaller datasets, sort each one, and then put them
back together. I'll explain why I call it a "reliable
way" below:
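For anyone curious, the split-and-reassemble approach I used looks roughly like this (the dataset name, split points, and BY variable are made up for illustration; my actual code differs):

```sas
/* Split the large dataset into a few smaller pieces. */
data part1 part2 part3;
   set big;                      /* ~10 million records */
   if      _n_ <=  3500000 then output part1;
   else if _n_ <=  7000000 then output part2;
   else                         output part3;
run;

/* Sort each piece separately -- each sort needs far less work space. */
proc sort data=part1; by id; run;
proc sort data=part2; by id; run;
proc sort data=part3; by id; run;

/* Interleave the sorted pieces back into one sorted dataset. */
data big_sorted;
   set part1 part2 part3;
   by id;
run;
```

The SET statement with BY interleaves the already-sorted pieces, so the final step is a cheap sequential pass rather than another full sort.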
As you may imagine, I use this dataset for many other
analyses. One puzzling observation is how much SAS
processing time can vary. For a simple step, say,
creating an index, or running a regression by a group
variable, the time can range from about 40 minutes to
12 hours! I understand that speed can be affected by
factors such as what else I am using the computer for
at the same time (checking email, etc.), but the
variation seems too large to be explained by these
factors alone.
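For reference, the kinds of steps I mean are as simple as these (dataset, variable, and index names are illustrative, not my actual code):

```sas
/* Create a simple index on an existing dataset. */
proc datasets library=work nolist;
   modify big;
   index create id;
quit;

/* Regression fitted separately within each level of a group variable
   (the data must be sorted by the BY variable first). */
proc sort data=big; by group; run;
proc reg data=big;
   by group;
   model y = x1 x2;
run;
quit;
```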
Does anyone have any insight into why this is so?
Unlike a sort problem, where I can always try to find a
better approach, this variation issue seems harder for
me to deal with, since seemingly perfect code can take
an order of magnitude longer simply because I run it at
a different time.
BTW, if it isn't too much trouble, I'd appreciate it if
anyone sharing their wisdom could also cc my email
address. It's not very convenient for me to go to the
SAS-L archive to extract the replies, and besides, I'd
like to save them for later use.