Re: New package management (fbsd ports)
On 30-Sep-99 Steve Baker wrote:
> Erik wrote:
>
>> > But this requires that I have already made a "windowmaker" directory and
>> > downloaded the Makefile for it - right? Either that or every distro has
>> > to have directories and Makefiles for ALL the packages there will ever
>> > be.
>>
>> yes. Fortunately the ports dir is pretty easy to make (I think). The ports
>> collection has a pretty good spread of packages.
>
> That's never going to be acceptable - when a new package is announced,
> people want to download it immediately. Nobody is going to want to wait
> while someone updates a centralized /usr/ports directory - and then go to
> that site and download the new /usr/ports, install *that* and only then
> get that game!
>
this would differ from debs and rpms how? :) And how about dependencies? If
the pingus autoweb isn't updated with the newest version of clanlib
referenced, then it will get an old version of clanlib... RPMs have been
criticized because they have no real central repository, and debian packages
aren't exactly cutting edge... I think it's acceptable, common, de facto, and
implementing it as a common cvs repository would be a step in the right
direction.
> Also, whichever site maintains that directory is going to feel the
> 'slashdot effect' with frightening regularity.
>
definitely :) that's why I was thinking cvs and distributed repositories
stored at sun and ibm and metalab/sunsite. They've got the cpu/mem/bandwidth
to cope. CVS is cool because only the stuff that's changed gets transferred,
which makes for much smaller downloads than tarballed package files, a la
debian. The biggest concern, I think, would be the cpu load on the server,
since each server will be running many, many gzips. Possibly the cvs server
could be modified to only compress up to -z3?
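On the client side it'd be just this (host and module names made up here,
and you'd do a one-time 'cvs login' first for pserver):

    # first checkout grabs the whole tree - a one-time cost
    cvs -z3 -d :pserver:anoncvs@cvs.ports.example.org:/cvs checkout ports

    # after that, only files that actually changed come over the wire
    cd ports && cvs -z3 update -dP

The -z3 keeps the compression level modest on both ends, so the server isn't
burning cpu doing -z9 for every connection.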
>> If the ports framework is
>> usable by both linux and fbsd, then we have both groups actively working on
>> it.
>> As far as I can tell, there are fewer active fbsd developers than linux,
>
> (by about two orders of magnitude!)
>
>> and they have a very respectable ports situation. I don't think populating a
>> ports framework with all the fun goodies will be a serious problem :)
>
> I think maintaining it in a timely manner could become a problem - also it
> doesn't scale very well. Eventually, that /usr/ports directory is going to
> become VERY large! Suppose the whole world uses Linux and Windoze is
> history, there could quite easily be a million programs out there that you
> could download.
>
it's not incredibly scalable. There will be a point where any method fails
due to sheer package count. However, the current implementation has 2760
ports, I think, and the only place that's starting to look crowded is devel.
Development tools are also where *nix has a huge base of apps... This system
won't be perfect, but it's a step in the right direction imho :)
> You'd need a million directories and a million makefiles in your /usr/ports
> area - and to maintain that you'd need to download perhaps a Gigabyte from
> the site that maintains /usr/ports. OK, you could organize the /usr/ports
> site so that you only grab the parts of that hierarchy that you need to
> build a specific package - but now you've just moved the problem to needing
> to get all the parts of /usr/ports that your package needs.
>
debian and suse both weigh in around 2k packages, fbsd at 2700. It's gonna be
a long, long time before anything near 100k can be implemented, much less a
million. I think debian just about hit saturation at 1000 or so, mebbe 1500.
I think fbsd's method can cope with more packages more easily, but I'm pretty
sure 10k would be saturation, if not sooner. No one has ever sat down,
decided to do it one way, and had that way come out perfect. It all evolves,
in little steps... I think this is a step in the right direction :)
> There is a political issue too. Suppose I wrote a piece of software that
> the maintainer of /usr/ports didn't approve of? Suppose they refused to
> add my package to it for some reason?
>
Suppose the debian maintainers decided not to use your package? Suppose the
redhat ppl decided not to? The suse ppl? We already put this kind of trust in
people with ulterior motives. If some non-profit committee were formed, it
could provide some form of quality control.
> I prefer a scheme that's under my own control.
>
me too, I prefer a scheme that's under my control. Here we have a conflict.
>> > I suppose we could make the autoload script create a directory and a
>> > Makefile in the /usr/ports approved way:
>> >
>> > eg windowmaker.al contains:
>> >
>> > mkdir -p /usr/ports/x11-wm/windowmaker
>> > cd /usr/ports/x11-wm/windowmaker
>> > cat >Makefile <<HERE
>> > ...stuff...
>> > HERE
>> > make
>>
>> how will that guess dependencies?
>
> In the Makefile, presumably - however the present /usr/ports does it.
>
ports has *_DEPENDS variables (LIB_DEPENDS, BUILD_DEPENDS, RUN_DEPENDS) in
the Makefile, yes.
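So for the pingus/clanlib case above, the guts of a port Makefile would look
something like this (version number, site, and category invented for
illustration):

    PORTNAME=      pingus
    PORTVERSION=   0.2.2
    CATEGORIES=    games
    MASTER_SITES=  ftp://ftp.example.org/pub/pingus/

    # lib:port pairs - if libclanlib.so.0 isn't already installed,
    # build devel/clanlib from its own port first
    LIB_DEPENDS=   clanlib.0:${PORTSDIR}/devel/clanlib

    .include <bsd.port.mk>

bsd.port.mk does the real work - it chases LIB_DEPENDS (and BUILD_DEPENDS,
RUN_DEPENDS) recursively before it fetches and builds the port itself.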
> However, the more I think about it, the more I think the scheme I
> outlined yesterday is superior.
>
you only think that cuz it's yours :)
>> I think having a human maintainer in the works somewhere would be best.
>
> I think that's the biggest flaw!
>
I don't know of anyone who can code a program so robust that it never, ever
needs any kind of human intervention... And if there were no human
intervention, some numbnut would try to slip something in to exploit the
automation.
>> If this becomes semi-standard, then someone could cook up some easy
>> documentation on how to make a port framework, and hopefully developers
>> themselves (who usually have a pretty good idea of dependencies and what
>> the newest version is...) will actively maintain ports for their projects.
>> Make several 'central cvs repositories' that are chained to balance load,
>> and updating the ports hierarchy is as easy as a cvs update.
>
> You'd give CVS write-access to the /usr/ports server to just anyone?
>
hell no. If there's a committee, then people submit the port updates or new
ports to the committee, who in turn adds them. There are what, 2 ppl running
freshmeat? They seem to cope with quite a bit of work fairly well. I think
there're 3 people doing slashdot? They even verify that the stories are worth
giving a rat's ass about (sometimes).
> Yikes!
>
>> > wget seems a pretty solid tool for this kind of thing. It beats any
>> > kind of FTP-like tool because it knows how to get things via http as
>> > well as ftp.
>> >
>>
>> wget is impressive, but not omnipresent just yet.
>
> Hmmm - well perhaps not.
>
>> But it's very small, so I
>> wouldn't be opposed to having that handle downloading packages.
>
> Certainly each autoload script could check for wget's existence and
> patiently explain how to (manually) download and install it. Alternatively,
> I suppose we could use ftp to download it if it's absent. That's bad news
> from a portability point of view though, because not all versions of ftp
> will work from command line options.
>
> If the autoload mechanism ever became popular, wget would appear on
> distros pretty soon.
>
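the existence check is a one-liner at the top of each autoload script;
something like this untested sketch (wget URL from memory):

    #!/bin/sh
    # refuse to run without wget, and tell the user where to find it
    if ! type wget >/dev/null 2>&1; then
        echo "this script needs wget to fetch packages" >&2
        echo "grab the source from ftp://ftp.gnu.org/gnu/wget/" >&2
        exit 1
    fi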
>> A wrapper script with some exception handling should be implemented to
>> deal with host name lookup failures, route failures, down machines, moved
>> packages, busy servers, stoned servers, etc.
>
> Yep.
>
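a first cut at that wrapper could be as dumb as walking a mirror list
(mirror URLs invented):

    #!/bin/sh
    # fetchdist.sh <distfile> - try each mirror until one works; wget's
    # exit status covers dns failures, timeouts, and missing files alike
    file=$1
    for site in ftp://mirror1.example.org/ports/distfiles \
                ftp://mirror2.example.org/ports/distfiles
    do
        if wget -t 3 -T 30 "$site/$file"; then
            exit 0
        fi
        echo "fetch from $site failed, trying next mirror..." >&2
    done
    echo "all mirrors failed for $file" >&2
    exit 1

Not pretty, but it covers down machines and stoned servers; moved packages
need the mirror list itself kept current, which is what the cvs update would
handle.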
>> If a cvs network is the way to go (and I feel very strongly that it is), I
>> don't think we'll have much problem finding high speed hosts. I bet
>> various metalab/sunsite places will agree, and companies with vested
>> interest in the free *nix communities may agree if approached (ibm, sun,
>> sgi, etc).
>
> But that's *so* much more complex than the autoload mechanism.
>
yes, it's a little more elaborate than a kludge :) It would also unify the
*nixes that port the generalized 'ports' method in. It'd be a change from
several small tribes to one unified effort. And when it's outlived its
usefulness, it'll be tossed away for the next new thing.
If the autoweb way gets implemented instead of the ports way, and a program
calls for libblah and automatically downloads and installs version x of
libblah, then another program needs libblah version y - what happens? Does it
know that a different version was installed? Does it upgrade, or attempt dual
residence? Does it install the new one partially over the old one, breaking
the first program? How do you enforce sane dependency checking?
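To be fair, shared library versioning already gives dual residence at the
file level - the real question is whether the autoweb bookkeeping knows about
it. E.g., with the made-up libblah from above:

    /usr/lib/libblah.so.1.0                 # old binaries keep loading this
    /usr/lib/libblah.so.1 -> libblah.so.1.0
    /usr/lib/libblah.so.2.0                 # new binaries load this one
    /usr/lib/libblah.so.2 -> libblah.so.2.0
    /usr/lib/libblah.so -> libblah.so.2.0   # link-time default

That only works if the downloader installs version y alongside version x
instead of clobbering it (and reruns ldconfig afterwards) - exactly the kind
of policy a ports Makefile can encode and an ad-hoc script will get wrong.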
> --
> Steve Baker http://web2.airmail.net/sjbaker1
> sjbaker1@airmail.net (home) http://www.woodsoup.org/~sbaker
> sjbaker@hti.com (work)
>
-Erik <br0ke@math.smsu.edu> [http://math.smsu.edu/~br0ke]
The opinions expressed by me are not necessarily opinions. In all
probability, they are random rambling, and to be ignored. Failure to ignore
may result in severe boredom or confusion. Shake well before opening. Keep
Refrigerated.