Re: Tools
On Sun, 9 Jan 2000, Erik wrote:
> I haven't read his work, but I've heard people complain that his solution to
> everything is assembly...
I can understand that. I suppose the dividing line is just how important
runtime performance is to you, versus other fair considerations like
portability, natch.
> Don't let this stop you from learning assembly, tho. :) A good background in
> assembly often makes a better C programmer, as you understand how the machine
> really works at a byte level.
I imagine it would help there. For the very best efficiency, it appears
that such understanding is essential. Hence tail recursion in Lisps.
> Write the code in a higher level language, like C or C++. Then profile it. If
> you notice something simple that's getting a lot of activity and stealing a lot
> of CPU time, optimize that in the higher level language as much as you can. If
> it's STILL not satisfactory, and you think you see a perfect way to do it in
> assembly, comment out the code and write an assembly version and see if that
> makes a real difference. If it does, then leave it in assembly, leave the C
> comments, and maybe do some #ifdef ugliness to provide portability. Most of
> the time, good C code will be competitive with, if not outrun the assembly. The
> people who write good assembly of any scale are really wizards at their art
> imho :) (plus you want the C there for readability, or if someone wants to
> compile it on a different arch but doesn't know assembly for that arch/os, they
> can comment out the asm and uncomment the C and have it functional. Better yet,
> use an #ifdef so the configure script can pick that up...)
A good methodology, thanks. Though I should think that good assembly
programming, where the human writes code that runs at least as fast as
the compiler's output, can be learned. A compiler outruns a human at
assembly when the problem stretches across an entire modern application:
human programmers get bored and distracted, while the rote machine
application of the compiler's rules goes on and on. We may excel at Go
but fail at Chess. For the appropriately short routines worth optimizing,
I'm confident we can code better than the compiler.
> C64 had a bunch of different "monitors" which gave you opcode access to the
> runtime memory. I thought warpspeed had the best, tho fastloads was workable :)
> that was cool sh*t in the 80's. I don't know if I'd suggest doing realtime
> direct memory munging on a "modern" os. I know there were plenty of "oopses" on
> my c64's and c128 where I'd have to reset the 'puter. You really couldn't do
> any damage from locking the os or blowing something up on those, except maybe
> lose data. An oops on a machine with a hard drive or a multi-user os can make a
> big mess. If you do direct memory munging, make sure you're NOT root... I
> believe gdb can do direct memory shtuff. It reads with
> "disassemble <begin> <end>", but I don't know if it can assemble, too. :/
>
> gcc -S something.c <-- will produce something.s which is the assembly for
> the C program
> gcc will also assemble .s programs, acting as a frontend for as86 or gas I
> believe
Oh no, I wouldn't try to write assembly for Linux as if it were a
Commodore! I've read clearly enough about the pitfalls of old demo
coders. I'm just trying to get an interactive debugger to work well
enough so I can go through the examples in my x86 assembler book, which
was written expecting the user to use DOS DEBUG.
I do appreciate the tip for producing assembly code from gcc.
Ben Taylor