Muxe Inc Forums

What programming language do you use for NDN?


FreeDOSfan


 

Is it TMT PASCAL? Where can I get it legally?

TMT implies that it is for teachers only - and I am NOT a teacher right now. :angry:

FreeDOSfan
NDN is developed with Virtual Pascal

 

Thanks.

 

which is dead by now unfortunately.

But it is very fast, produces very fast code and is very stable.

 

Does this (fast & stable) apply to the DOS (DPMI32) target?

It is, as far as I know, the ONLY program written in 32-bit PASCAL.

Is the DOS version of NDN still alive? The name suggests "NO"

(Necro=Death :D).

 

a lot of its code is asm; the regex library and context menu code are in C

 

"regex" & "context menu" does this apply to DOS ??

 

I had a look at VPASCAL: it seems that the DOS target was patched

in later by an external devel, and it works, while FreePASCAL started

on DOS and is rather unusable there.

 

:wacko:


hi FDf!

 

it applies to the code generated by VP and it's working in Windows/Linux

unfortunately, i don't have a working DOS executable of VP

 

but i can compile and test for DOS in windows w/o problems

 

and all 3 targets are alive (D32, W32, LNX)

 

the regex and context menu code is independent of dos/windows

(of course, context menu only works in windows)

it has nothing to do with VP itself

 

i don't know about FP

all i know is that it is huge

 

hope this helps ( © Dandv )

Stefan / AH

GPFault
Virtual Pascal is DEAD!!!

I 100% agree

 

VP support for DOS/Windows/OS2(?) is good enough, but Linux, hm.... IMHO VP Linux support is extremely ugly and incomplete...

Yes, the current Linux version is not bad, but I think that a fully usable Linux version can't be created with VP.

The main reason is: VP seems not to support dynamic linking for Linux, and many standard library functions are reimplemented at kernel level (with int 80).
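
To illustrate what "reimplemented at kernel level" means, here is a minimal sketch (hypothetical FPC-style inline assembler for 32-bit x86 Linux, not NDN's or VP's actual source) of calling sys_write through int 80 instead of going through libc:

  {$asmmode intel}
  function KernelWrite(fd: LongInt; buf: Pointer; len: LongInt): LongInt; cdecl; assembler;
  asm
    push ebx          { ebx is callee-saved in the i386 ABI }
    mov eax, 4        { syscall number of sys_write on 32-bit x86 Linux }
    mov ebx, fd
    mov ecx, buf
    mov edx, len
    int $80           { enter the kernel directly, bypassing libc's write() }
    pop ebx           { the result (byte count or -errno) is left in eax }
  end;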

 

So if full Linux support is not one of the main lines of development, VP is OK, but otherwise VP isn't.

 

 

(all text after here is only IMHO)

 

Also I 100% agree with

this post

There are many file managers for Windows. Generally, I think, Far is not much better than NDN and NDN is not much better than Far.

For me NDN is a little better than Far, so I use NDN. If there were no NDN I would have to use Far; in that case it wouldn't be a big problem.

On Linux I have no good alternatives (mc isn't one), so I need to use NDN, but right now it has too many small bugs. Most of them stem from the fact that DN was originally an old DOS program and now it emulates the DOS API on all OSes (not everywhere, but in many parts of the code).

Some things like attributes can be fixed, but some modern (OK, they were modern in 1992(?) when DN was born) technologies such as multithreading (background copy, background filename completion) and UTF-8 need rewriting of >>50% of the code.

 

So the questions are:

2Stefan: Is good Linux support among the 2-3 main lines of development?

 

2All: Is there anybody besides me who wants the Linux version more than the Windows one?

 

P.S. I am NOT a big linux fan (now :) )

I 100% agree

yes - it's a fact

 

VP support for DOS/Windows/OS2(?) is good enough, but Linux, hm.... IMHO VP Linux support is extremely ugly and incomplete...

Yes, the current Linux version is not bad, but I think that a fully usable Linux version can't be created with VP.

The main reason is: VP seems not to support dynamic linking for Linux, and many standard library functions are reimplemented at kernel level (with int 80).

the support is incomplete, but not ugly - it is the LINUX OS that makes things ugly

 

yes, i reimplemented some code for exactly one reason:

because the linux internals change a lot over the years, and, with FTP f.ex., i was not sure if i should rely on LINUX C API

at the moment the sockets actually do use the C code from libc.so.6

(most of them)

 

also: isn't this dynamic linking?!

 

So if full Linux support is not one of the main lines of development, VP is OK, but otherwise VP isn't.

(all text after here is only IMHO)

it is one main goal and VP is good enough for this too

 

Also I 100% agree with

this post

There are many file managers for Windows. Generally, I think, Far is not much better than NDN and NDN is not much better than Far.

For me NDN is a little better than Far, so I use NDN. If there were no NDN I would have to use Far; in that case it wouldn't be a big problem.

i would have to retranslate the post... i don't think it was that important

 

On Linux I have no good alternatives (mc isn't one), so I need to use NDN, but right now it has too many small bugs. Most of them stem from the fact that DN was originally an old DOS program and now it emulates the DOS API on all OSes (not everywhere, but in many parts of the code).

Some things like attributes can be fixed, but some modern (OK, they were modern in 1992(?) when DN was born) technologies such as multithreading (background copy, background filename completion) and UTF-8 need rewriting of >>50% of the code.

background work can be done w/o problems:

 

1. the TGROUP object actually is a simple multithreading container

the new version will feature background copy/deletion

also, don't forget that FILE FIND already is working in the background

 

2. the VP RTL features internal multithreading too (see the sketch below)
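
a minimal sketch of that kernel-thread variant (FPC-style Pascal; CopyWorker and the program name are made-up illustration, not NDN's actual code):

  program BgCopySketch;
  {$ifdef unix}uses cthreads;{$endif}    { pull in the kernel thread manager on unix }

  function CopyWorker(p: Pointer): PtrInt;
  begin
    { ... copy/delete files here, posting progress to the UI ... }
    CopyWorker := 0;
  end;

  var
    tid: TThreadID;
  begin
    tid := BeginThread(@CopyWorker);     { W32/LNX/OS2: a real kernel thread }
    { ... the main loop keeps handling keyboard and screen here ... }
    WaitForThreadTerminate(tid, 0);      { 0 = wait with no timeout }
  end.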

 

i don't know where to go with UTF-8

do we really need it, what do we need to support it...

 

i'd rather rewrite 50% of the current code than port the whole program to FP or to C

btw, 50% is a bit too much, or did you only mean the VP RTL?

 

So the questions are:

2Stefan: Is good Linux support among the 2-3 main lines of development?

 

2All: Is there anybody besides me who wants the Linux version more than the Windows one?

 

P.S. I am NOT a big linux fan (now :) )

moving from VP to any other compiler will take NDN MONTHS of work without ANY improvements,

until it is as bug free as at the moment

and then we won't know if the new compiler will really help us, or introduce new problems

 

i always enjoy doing ground/basic work concerning the VP RTL, which involves a lot of reading and trial and error on the OS/KERNEL front

 

i also read the FP RTL for some help, and, believe me, it IS damn ugly too

 

good linux support IS possible with VP

 

best wishes

Stefan / AH

GPFault

OK. Sorry if my previous post was too critical; I need to think about much of it again.

the support is incomplete, but not ugly - it is the LINUX OS that makes things ugly

 

yes, i reimplemented some code for exactly one reason:

because the linux internals change a lot over the years, and, with FTP f.ex., i was not sure if i should rely on LINUX C API

int 80 is absolutely non-standardized and really changes over the years, while the Linux C API is fixed and never changes.

 

[very imho]

the only non-ugly way to communicate with Linux kernel is using statically or dynamically linked libc.

using int 80 instead of libc is like using ntdll.dll instead of kernel32.dll

I see no way to change this situation mostly because of vp.

[/very imho]

 

at the moment the sockets actually do use the C code from libc.so.6

(most of them)

 

also: isn't this dynamic linking?!

 

Sorry, I didn't look into the sockets before your post. If there really is dynamic linking that works - it's good

but ldd says

$ ldd ndn

not a dynamic executable

 

(???)

and pe2elf is a really strange tool...

 

also, don't forget that FILE FIND already is working in the background

such background processing in DOS is very good, but in a modern multithreading OS, background processing without in-kernel threads is slow and IMHO ugly.

 

 

i don't know where to go with UTF-8

do we really need it, what do we need to support it...

 

i'd rather rewrite 50% of the current code than port the whole program to FP or to C

btw, 50% is a bit too much, or did you only mean the VP RTL?

 

when i say "UTF-8 support" i mean all strings inside NDN being UTF-8 (which is a variable-bytes-per-character encoding). Variable bytes per character break most of TVision. So, realistically speaking, it is impossible.

Converting to/from UTF-8 at the interface with the terminal and/or filesystem IMHO is ugly and leads to lots of bugs.
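
For illustration, a minimal sketch (plain Pascal; the helper name is mine) of why byte count and character count diverge in UTF-8 - the lead byte alone decides how many bytes one character occupies, which is exactly what breaks fixed-width TVision code:

  { how many bytes does the UTF-8 sequence starting with this lead byte have? }
  function Utf8SeqLen(lead: Byte): Integer;
  begin
    if lead < $80 then Utf8SeqLen := 1                 { 0xxxxxxx: plain ASCII }
    else if (lead and $E0) = $C0 then Utf8SeqLen := 2  { 110xxxxx }
    else if (lead and $F0) = $E0 then Utf8SeqLen := 3  { 1110xxxx }
    else if (lead and $F8) = $F0 then Utf8SeqLen := 4  { 11110xxx }
    else Utf8SeqLen := 1;  { continuation or invalid byte: count it as one }
  end;

  { so Length(s) in bytes no longer equals the number of screen columns }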

 

i would have to retranslate the post... i don't think it was that important

here is the meaning of the part i agree with (not a strict translation, but the meaning is mostly the same):

 

IMHO NDN developers should pay attention mostly to Linux, not to Windows. There are many file managers for Windows that are good. NDN is as good as Far, but it is nearly impossible to create a file manager much better than Far. But on the Linux platform there are no good console file managers, and NDN has a chance to become THE main console FM for Linux.


Hi Vasily!

 

OK. Sorry if my previous post was too critical; I need to think about much of it again.

no no, it's really ok

 

int 80 is absolutely non-standardized and really changes over the years, while the Linux C API is fixed and never changes.

 

[very imho]

the only non-ugly way to communicate with Linux kernel is using statically or dynamically linked libc.

using int 80 instead of libc is like using ntdll.dll instead of kernel32.dll

I see no way to change this situation mostly because of vp.

[/very imho]

The LINUX C API SHOULD never change, but if you look through all the APIs you can also find quite some

deprecated APIs. Can we rely on the different versions of libc.so.x?

 

look at \vp\lib.lnx\defs\libc_5.def, there you can add all lib calls you want,

like in \vp\lib.w32\defs\win32.def

but i haven't checked it out myself yet
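
for comparison, this is roughly what such an import looks like in FPC syntax (a hypothetical sketch; a libc_5.def entry plays the same role for VP - the call gets resolved from libc.so by the dynamic loader at startup):

  program LibcImportSketch;
  { bind one libc routine dynamically instead of reimplementing it via int 80 }
  function getpid: LongInt; cdecl; external 'c' name 'getpid';
  begin
    WriteLn('my pid is ', getpid);
  end.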

 

one problem i see is: what happens if we need libc.so.6 and there will be a libc.so.7 in a new linux release?!

is it safe to create a link to libc.so.7 with the name libc.so.6?

 

at the moment i don't believe that there's any int 80h call that has anything to do with the current problems

 

Sorry, I didn't look into the sockets before your post. If there really is dynamic linking that works - it's good

but ldd says

$ ldd ndn

not a dynamic executable

 

(???)

and pe2elf is a really strange tool...

all i can say is that it works :)

 

such background processing in DOS is very good, but in a modern multithreading OS, background processing without in-kernel threads is slow and IMHO ugly.

only the D32 RTL has built-in thread support

OS2, W32 and LNX use system/kernel threads

 

when i say "UTF-8 support" i mean all strings inside NDN being UTF-8 (which is a variable-bytes-per-character encoding). Variable bytes per character break most of TVision. So, realistically speaking, it is impossible.

Converting to/from UTF-8 at the interface with the terminal and/or filesystem IMHO is ugly and leads to lots of bugs.

there's no reason IMO to do EVERY string in unicode

for speed reasons alone and to support older OS's and PCs i will never do that

 

that's what the RTL is there for

we detect the OS, we need to know what to do with it

 

i think it can be done, if we need it at all

 

here is the meaning of the part i agree with (not a strict translation, but the meaning is mostly the same):

 

IMHO NDN developers should pay attention mostly to Linux, not to Windows. There are many file managers for Windows that are good. NDN is as good as Far, but it is nearly impossible to create a file manager much better than Far. But on the Linux platform there are no good console file managers, and NDN has a chance to become THE main console FM for Linux.

i partially agree

 

*I* have to focus on all 3 ports, because i use all of them and i don't want to drop any port

 

one problem is that i don't have that much time to focus on LNX,

because i have a big todo list of features and bugs to solve, not only focused on LNX

 

another problem is linux itself, which is so much different from DOS/W32/OS2

 

i have worked on NDN/LNX for far too long to drop it now

even if it will progress slowly, i will continue working on it, with VP

 

best regards

Stefan / AH

 

PS: i have succeeded in making a D32 version of VPC.EXE, which runs in pure DOS with the help of the HX DOS Extender. Just in case someone might want to develop directly in DOS.

FreeDOSfan
Virtual Pascal is DEAD!!!

 

Maybe :(

 

Maybe we will use the FreePascal compiler?

 

Please stay with VP. DEAD? No problem, as long as it works.

 

FreePASCAL? The DOS version is really CRAPPY :angry: :( ,

using FP would kill the DOS version of NDN.

 

PS: i have succeeded in making a D32 version of VPC.EXE, which runs in pure DOS with the help of the HX DOS Extender. Just in case someone might want to develop directly in DOS.

 

A D32 version running with HX-DOS, or a WIN32 version???

 

HX should run the WIN32 version of VP in DOS ... the D32 version of NDN

does NOT require HX. Could you please post more details? I tried

to install VP some time ago and got it working, but failed to install

Veit K.'s additional files.


hi FDF!

 

sorry, i was not expressing myself clearly enough:

 

of course i got the W32 version of VPC running and compiling the VP-RTL in DOS (DOS 7.0 of W98) with HX.

but the problem is that it hangs when i try to compile the NDN sources.

i didn't have much time yet to check what exactly hangs VPC.

maybe it's a little bug in the HX WINAPI library

i think it can be "easily" found using the HX debugger, but i can't do that at the moment

i would be very happy if someone else could check this

 

ok, a quick braindump regarding the VP install:

 

- install VP

- install VKs files over it (simply extracting i think)

- run APPLYDIF.EXE from the VP base directory

NOTE: VKs v279 applydif versions crashed on my system(s) (also PE2LE.EXE)

if it happens for you too, try the older VP D32 archives from VK and copy/use those exe files; that worked for me

- go to VP\SOURCE\RTL\ and run _all.bat; this compiles the new/updated RTL

 

i hope you can get it to work now

 

bye

Stefan / AH


Some opinions about various topics in this thread:

 

About FPC usage:

the Dos port was indeed in decline, even though Dos was at one time the core platform of FPC. However, with 2.2.2 the platform will hopefully be back closer to releasable compatibility.

 

Even if you stick to VP, getting some FPC testing done might be useful. It improves FPC and allows investigating the problems. Especially since its "Vision" part is copyright-free. FPC on Linux/Windows should generally be way better than VP, except maybe for debugging on Windows.

 

One of the reasons that the FPC port is in decline is exactly this: when the hordes moved on, Dos usage ground to a halt, and the few remaining don't want to invest any time anymore, and keep patching old stuff (TP, VP) indefinitely. Which is a pity. At least try to experiment and file bugs a bit. As said, the new 2.2.2 release should at least get the dos port closer back to the quality it had during later 1.0 times.

 

If there are issues with the *nix FPC rtl, I'd gladly hear them, since I'm the maintainer. Note that FPC made enormous progress after the original postings to this thread, first and foremost the entirely redesigned Unix ports. However, some of the issues that you (probably) have now (utf-8 and long file names in TV) are there in FPC too. It is unclear what to do about that: upgrade TV to be more Delphi dialect compatible, or keep it the way it is. The point is a bit academic, since nobody is using Turbo Vision under FPC, and it is only minimally fixed from time to time to keep the textmode IDE running.

 

Same with unicode and threads. They are supported on non-dos targets, but not (yet) on dos. However, these kinds of functionality are implemented using manager records (records with procedure variables) and are thus pluggable by the user without even a recompile, at least in principle.
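
As a rough illustration of the manager record idea (a minimal sketch with made-up names; FPC's real manager records are richer):

  type
    { the RTL calls through a record of procedure variables... }
    TThreadManagerSketch = record
      DoBeginThread: function(proc: Pointer; arg: Pointer): LongInt;
      DoEndThread: procedure(exitCode: LongInt);
    end;

  var
    CurrentTM: TThreadManagerSketch;

  { ...so a user unit can install its own implementation at startup, }
  { and every later RTL thread call is routed through the new record }
  procedure SetThreadManagerSketch(const tm: TThreadManagerSketch);
  begin
    CurrentTM := tm;
  end;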

 

About "libc and Linux".

 

The libc interface is not a hard API with a defined binary interface. This is because it is governed by Unix standards like POSIX, but that is a different way of specifying an API, and being able to use the system compiler is one of the requirements of that way. IOW, if you are not the system compiler, you might not be able to interface with a POSIX interface the way it is meant to be done (read: using OS-provided headers).

 

So while the man page declarations stay the same, types and macro implementations often change. Especially the threading-related part generally breaks every major release, and sometimes in between. The comparison

 

The distro maintainers that keep 3rd party C/C++ packages running are pretty useless for anything outside the basic "configure; make; make install" run. They are already confused if you use a different C compiler (I can remember enough problems with packages using TenDRA, lcc or pgg).

 

So IMHO the arguments about libc linking in this thread are slightly naive. Show me a nineties shared linux binary that is still working without compatibility settings or libs, and I'll show you my 1998 FPC binaries. Even statically linked linux binaries using libc probably won't work anymore.

 

Sockets are btw perfectly doable using kernel calls. However, DNS resolving and some forms of unicode use are things better left to the OS (this is because resolving is pluggable in libc for directory systems).


Hi Marco!

 

Some opinions about various topics in this thread:

Always good to hear!

 

About FPC usage:

...

Even if you stick to VP, getting some FPC testing done might be useful. It improves FPC and allows investigating the problems. Especially since its "Vision" part is copyright-free. FPC on Linux/Windows should generally be way better than VP, except maybe for debugging on Windows.

 

One of the reasons that the FPC port is in decline is exactly this: when the hordes moved on, Dos usage ground to a halt, and the few remaining don't want to invest any time anymore, and keep patching old stuff (TP, VP) indefinitely. Which is a pity. At least try to experiment and file bugs a bit. As said, the new 2.2.2 release should at least get the dos port closer back to the quality it had during later 1.0 times.

 

If there are issues with the *nix FPC rtl, I'd gladly hear them, since I'm the maintainer. Note that FPC made enormous progress after the original postings to this thread, first and foremost the entirely redesigned Unix ports. However, some of the issues that you (probably) have now (utf-8 and long file names in TV) are there in FPC too. It is unclear what to do about that: upgrade TV to be more Delphi dialect compatible, or keep it the way it is. The point is a bit academic, since nobody is using Turbo Vision under FPC, and it is only minimally fixed from time to time to keep the textmode IDE running.

 

Same with unicode and threads. They are supported on non-dos targets, but not (yet) on dos. However, these kinds of functionality are implemented using manager records (records with procedure variables) and are thus pluggable by the user without even a recompile, at least in principle.

Yes, UTF is unsolved at the moment, but I really never tried to do anything about it.

No known problems with filenames though.

 

But the main problem here is limited time:

I don't want to spend or waste time on a not yet necessary compiler change.

As I described above, it would take months just to get NDN back to where it is right now.

 

The TV code was extended a lot.

And my main concern is the OS-dependent RTL code:

Every RTL of the 3 VP targets was highly improved and rewritten.

I would have to spend a long time, doing the same to another RTL,

until it behaves like it should.

 

As my todo list is quite long, I'd rather spend my time on bugfixing and new features,

until the day comes when VP is not working anymore for NDN...

 

About "libc and Linux".

 

The libc interface is not a hard API with a defined binary interface. This is because it is governed by Unix standards like POSIX, but that is a different way of specifying an API, and being able to use the system compiler is one of the requirements of that way. IOW, if you are not the system compiler, you might not be able to interface with a POSIX interface the way it is meant to be done (read: using OS-provided headers).

 

So while the man page declarations stay the same, types and macro implementations often change. Especially the threading-related part generally breaks every major release, and sometimes in between. The comparison

 

The distro maintainers that keep 3rd party C/C++ packages running are pretty useless for anything outside the basic "configure; make; make install" run.

 

So IMHO the arguments about libc linking in this thread are slightly naive. Show me a nineties shared linux binary that is still working without compatibility settings or libs, and I'll show you my 1998 FPC binaries. Even statically linked linux binaries using libc probably won't work anymore.

 

Sockets are btw perfectly doable using kernel calls. However, DNS resolving and some forms of unicode use are things better left to the OS (this is because resolving is pluggable in libc for directory systems).

Nice that you share my point of view. :)

At the moment I don't know of any problem with the kernel calls in NDN.

So I should not worry about this either.

 

Stefan / AH

Yes, UTF is unsolved at the moment, but I really never tried to do anything about it.

No known problems with filenames though.

 

(Slowly, the unices are adopting UTF-8 consoles)

 

But the main problem here is limited time:

 

Yes, I know the feeling.

 

I don't want to spend or waste time on a not yet necessary compiler change.

As I described above, it would take months just to get NDN back to where it is right now.

 

True. But if you don't start modest preparations, sooner or later you'll hit a brick wall. And workarounds cost a lot of time also.

 

As my todo list is quite long, I'd rather spend my time on bugfixing and new features,

until the day comes when VP is not working anymore for NDN...

 

Ok. Well, as long as you realize that, it's fine.

 

Still it is unfortunate. As said before the decline of the older targets (Dos, OS/2 and Amiga to a lesser degree) is mostly due to

similar reasoning of all involved users. (and a bit the fact that all FPC devels moved on, some already adopting Delphi at work when it first

came out)

 

The TV code was extended a lot.

And my main concern is the OS-dependent RTL code:

Every RTL of the 3 VP targets was highly improved and rewritten.

I would have to spend a long time, doing the same to another RTL,

until it behaves like it should.

 

Partially yes. While the FPC RTL is much more battle-tested on non-dos (win32/64, *nix, OS X), it is much less a

Dos emulation than VP's is, due to the larger number of platforms, and a much, much higher percentage of

users whose current development efforts don't date back to Dos TP times.

 

Moreover, mutating the RTL too heavily would just move you from one island (VP) to the next (a specific FPC version, frozen in time)

 

Still it is a pity that nobody will even try, because we ourselves don't know the exact situation of the older FPC ports. How much of it is

simply some minor attrition, and how much is fundamental. For that you really need true users, that make detailed reports with narrowed down tests etc.

 

And to be honest, if no user wants to invest in Dos anymore, I'm having doubts about investing in it as an FPC developer too. Maybe it is time to let it die.

Partially yes. While the FPC RTL is much more battle-tested on non-dos (win32/64, *nix, OS X), it is much less a

Dos emulation than VP's is, due to the larger number of platforms, and a much, much higher percentage of

users whose current development efforts don't date back to Dos TP times.

Here's the first problem I see (if I understand you correctly):

NDN *is* a DOS program. How many TP-compatible RTL calls would I have to replace

with FP-compatible ones?

 

Moreover, mutating the RTL too heavily would just move you from one island (VP) to the next (a specific FPC version, frozen in time)

I don't want to call it mutating. But from my point of view all RTL targets should behave the same

(if possible). So yes, the VP W32 and LNX RTL had to be modified, because they were not written

with that in mind.

I wonder what RTL comparison results I would get with FP.

 

Still it is a pity that nobody will even try, because we ourselves don't know the exact situation of the older FPC ports. How much of it is

simply some minor attrition, and how much is fundamental. For that you really need true users, that make detailed reports with narrowed down tests etc.

 

And to be honest, if no user wants to invest in Dos anymore, I'm having doubts about investing in it as an FPC developer too. Maybe it is time to let it die.

Well, I have asked myself the same thing all the time over the past 6 years.

Who cares for NDN? Or how many?

There are some die-hard and loyal users who stayed with me over the years.

But there's no really big community - at least not obviously.

 

Still I enjoy working on NDN because there are so many things I can

try out and I can add almost everything to NDN that makes using a

computer worthwhile. Like DOS FTP support.

 

I look forward to 2GB+ support and getting the LNX port back on track.

As soon as I hit plugins it will get really interesting.

Even if NDN is progressing slowly like the past few months,

I still don't have the feeling it's not worth investing more time into it.

 

I don't know all your reasons for working on FP.

If you don't feel it's worth your time then you have to stop it and

find something new.

It would be a mistake to let the DOS port die completely, maybe even

to stop supporting it at all in future releases.

As much as it was a failure to drop the active VP development,

especially new targets...

 

All the best,

Stefan / AH

Here's the first problem I see (if I understand you correctly):

NDN *is* a DOS program. How many TP-compatible RTL calls would I have to replace

with FP-compatible ones?

 

Not that many, I think. It is more that the behaviour may differ slightly. Just like e.g. I/O and rules for filenames with an LFN unit are slightly different from original Dos.

 

The point was more that we don't go as far as emulating dos-specific quirks in the behaviour of calls like VP does. At least afaik.

 

I don't want to call it mutating. But from my point of view all RTL targets should behave the same

(if possible). So yes, the VP W32 and LNX RTL had to be modified, because they were not written

with that in mind.

I wonder what RTL comparison results I would get with FP.

 

I think in general the effect would be beneficial. You might be able to clean up a lot since the FPC rtl is simply richer, and designed for portability.

 

Well, I have asked myself the same thing all the time over the past 6 years.

Who cares for NDN? Or how many?

There are some die-hard and loyal users who stayed with me over the years.

But there's no really big community - at least not obviously.

 

Still I enjoy working on NDN because there are so many things I can

try out and I can add almost everything to NDN that makes using a

computer worthwhile. Like DOS FTP support.

 

It works the same for me with FPC. Current FPC is quite fine, but say in the late 2003 timeframe the state was somewhat depressing.

 

I look forward to 2GB+ support and getting the LNX port back on track.

As soon as I hit plugins it will get really interesting.

Even if NDN is progressing slowly like the past few months,

I still don't have the feeling it's not worth investing more time into it.

 

If you plan expansion rather than mere maintenance, I'd look into FPC. It will expand with you, and surprise you occasionally. Like the complete DBF, CHM and various image format support.

 

I don't know all your reasons for working on FP.

If you don't feel it's worth your time then you have to stop it and

find something new.

 

I have no problems like that with FPC, but I'm way more ambivalent about the Dos port. The main reason I work on it from time to time is that I have a weak spot for fullscreen textmode programs, and work on the IDE occasionally. And most of the IDE bugreporters are on dos.

 

It would be a mistake to let the DOS port die completely, maybe even

to stop supporting it at all in future releases.

 

Please provide reasons. Moreover, FPC core didn't let it die, the Dos users did, by not participating in development enough. The FPC core kept it on life support for years, just in the hope that somebody would step up. But at some point atrophy sets in and the result is no longer shippable. That happened during 2.2.0.

 

But it is already quite hard to find enough users using and testing to create bugreports, let alone developers. It's the same problem as with VP. You can't just will something alive, somebody with an interest must be there to do the work.

 

But luckily FPC is very maintainable, which means that when people (like Tomas and Giullio a year back) invest some time in the dos port, it quickly improves in quality again, and probably now it will be ok again for a while. Till atrophy sets in again, or regular patches, testing and (quality) bugreports from dos users start flowing again. It only takes one dedicated person.....

 

Also there was Laaca (from the bttr forum) who has posted some significant dos-related patches over the last two years.

 

Mostly since only the dos-specific parts need to be considered, which is on the order of 290 KB. Excluding graph.

 

As much as it was a failure to drop the active VP development,

especially new targets...

 

From what I have seen of it, it was completely sane. VP simply had no development community/culture, and as soon as the major contributors (Vitaly, Alan) dropped out, it was dead in the water. There hadn't been significant progress in the years before the end, and on the compiler the standstill was even longer (both feature- and bug-wise). Here is what I remember from it:

 

I have been in contact with Alan occasionally since 2000. Iirc the first mail was about how VP handled the Turbo Vision copyright situation which was hurting us bad back then (1999-2000).

 

During 2003 or 2004 Alan offered us a copy of the VP sourcecode, to see if we could use parts. (But not yet the permission to actually do so.) However, the fact that the full compiler is in assembler makes it essentially "don't touch". The IDE is hampered by years of workarounds and copyrighted code. The documentation needed commercial tools. Nothing was really directly reusable.

 

While a minor disappointment, that was not that much of a problem for us; the main attractions were more in the OS/2 direction, where VP has traditionally been strong. However, FPC OS/2 development slowed down during that same period due to the responsible people graduating iirc (and hasn't recovered since), and new OS/2's are virtually non-existent, so there were simply no people to follow that up.

 

So most of the VP code was quite totally unmaintainable, and the rewrite needed to tackle that and restart VP development would have alienated the extremely conservative users (who were with VP and not FPC exactly because of that conservatism). I have never believed in a successful restart of VP.

 

An attempt would not survive the two years (or longer) needed to totally rewrite, clean up and replace copyrighted parts (it took FPC two years just to replace TV), and Pascal is not popular enough to survive that.

 

But the biggest problem is that the main attraction of people to VP, the stability, would be compromised. So I think it would not even work for the current userbase given enough developers. Somehow the VP users don't see that development and stability in one and the same codebase are mutually exclusive.

 

Anyway, I digress, back to the timeline:

 

Personally I would have liked VP to have been put in maintenance mode in 2003-2004 and then to direct all future efforts into making migration to FPC possible. A merge of the projects at arm's length, so to say. I proposed that to Alan then, but he had doubts. Meanwhile heaps of indignant users pledged solemn allegiance on the maillist to start developing VP, and Alan kept VP alive. I (and anybody else who had done any work in large Pascal projects) had serious doubts, since clearly none of them had compiler experience, and most didn't even have enough experience to make bugfixes.

 

I didn't want to be a killjoy though, so I helped Noah Silva to backport FPC's sysutils to VP (to fix one of the copyrighted source problems), but that died out because Noah dropped out. Afaik he realised how much of Delphi compatibility was really missing in VP, and this wouldn't be a one-time effort.

 

The only one who afaik really did something was Veit, who worked quite hard, but he is more an RTL/library maintainer like I am, not a compiler devel. At least not yet.

 

1 1/2 years later nothing had happened effectively, and Alan pulled the plug. He once made a remark that I probably was right back then, proposing to put VP in bugfix-only mode in 2003 (roughly the state it is in now) and directing future users and efforts towards FPC.

 

FPC had less to gain than many people think. Essentially we hoped that Veit would cross over, at least partially, and that the understaffed "old" ports, Dos and OS/2, would have been given a breath of life. Maybe some of the stability freaks could have been used to maintain older FPC versions with bugfixes only, to get very stable FPC releases too.

I think in general the effect would be beneficial. You might be able to clean up a lot since the FPC rtl is simply richer, and designed for portability.

But it might behave differently than what NDN requires, hence RTL modifications for NDN might be required too.

 

If you plan expansion rather than mere maintenance, I'd look into FPC. It will expand with you, and surprise you occasionally. Like the complete DBF, CHM and various image format support.

I do plan expansion.

The possibility to use C libraries actually gives me enough possibilities.

If I cannot use existing code, I will write my own.

 

Please provide reasons. Moreover, FPC core didn't let it die, the Dos users did, by not participating in development enough. The FPC core kept it on life support for years, just in the hope that somebody would step up. But at some point atrophy sets in and the result is no longer shippable. That happened during 2.2.0.

What reasons are there to keep an older software/OS alive? Because...

- there are still users out there (esp. concerning DOS)

- it's entertaining to support all possible targets

- keeping it for historical interests and future references, not losing knowledge

 

The question here is, why does a *compiler* actually have to update its older targets all the time?

From the language point of view there should be no need to change anything target specific

(btw, except for a few glitches the VP language support is good enough for me).

 

Of course, if the RTL is not complete, like missing LFN, then you might need to update it.

But at some point of development it should be possible to drop active development

without making the DOS port unusable in future releases.

 

What makes the FPC core devs have to update the DOS RTL constantly over the years?

 

During 2003 or 2004 Alan offered us a copy of the VP sourcecode, to see if we could use parts. (But not yet the permission to actually do so.) However, the fact that the full compiler is in assembler makes it essentially "don't touch". The IDE is hampered by years of workarounds and copyrighted code. The documentation needed commercial tools. Nothing was really directly reusable.

I don't agree on the assembler statement, but let's not start a discussion on that. ;)

 

While a minor disappointment, that was not that much of a problem for us; the main attractions were more in the OS/2 direction, where VP has traditionally been strong. However, FPC OS/2 development slowed down during that same period due to the responsible people graduating iirc (and hasn't recovered since), and new OS/2's are virtually non-existent, so there were simply no people to follow that up.

FP DOS == OS/2:

Why should newer FP releases suddenly break/lose the OS/2 support?

 

So most of the VP code was quite totally unmaintainable, and the rewrite needed to tackle that and restart VP development would have alienated the extremely conservative users (who were with VP and not FPC exactly because of that conservatism). I have never believed in a successful restart of VP.

 

An attempt would not survive the two years (or longer) needed to totally rewrite, clean up and replace copyrighted parts (it took FPC two years just to replace TV), and Pascal is not popular enough to survive that.

Yes, VP as well as NDN users are pretty conservative. But you get used to it and become careful with

changes and new features. In fact that is the real-life situation in professional projects.

 

Personally I would have liked VP to have been put in maintenance mode in 2003-2004 and then to direct all future efforts into making migration to FPC possible. A merge of the projects at arm's length, so to say. I proposed that to Alan then, but he had doubts. Meanwhile heaps of indignant users pledged solemn allegiance on the maillist to start developing VP, and Alan kept VP alive. I (and anybody else who had done any work in large Pascal projects) had serious doubts, since clearly none of them had compiler experience, and most didn't even have enough experience to make bugfixes.

I personally would have no problem with a maintenance mode, except for the possibility to add new compiler targets.

 

I didn't want to be a killjoy though, so I helped Noah Silva to backport FPC's sysutils to VP (to fix one of the copyrighted source problems), but that died out because Noah dropped out. Afaik he realised how much of Delphi compatibility was really missing in VP, and this wouldn't be a one-time effort.

 

The only one who afaik really did something was Veit, who worked quite hard, but he is more an RTL/library maintainer like I am, not a compiler devel. At least not yet.

 

1 1/2 years later nothing had happened effectively, and Alan pulled the plug. He once made a remark that I probably was right back then, proposing to put VP in bugfix-only mode in 2003 (roughly the state it is in now) and directing future users and efforts towards FPC.

 

FPC had less to gain than many people think. Essentially we hoped that Veit would cross over, at least partially, and that the understaffed "old" ports, Dos and OS/2, would have been given a breath of life. Maybe some of the stability freaks could have been used to maintain older FPC versions with bugfixes only, to get very stable FPC releases too.

I totally understand you; if I were a maintainer/user of FP I would have hoped for the same things to happen as you did.

Fact is, it didn't happen.

 

But even if DOS users send FP bug reports, who will fix them?

In fact you not only need the users, but users who can fix the problems too.

 

I have to repeat myself:

As long as VP is good enough for my future NDN plans, I will keep using it.

If I ever have to switch to FP, I will let you know! :)

 

All the best,

Stefan / AH

But it might behave differently than what NDN requires, hence RTL modifications for NDN might be required too.

 

That goes for an updated VP too.

 

I do plan expansion.

The possibility to use C libraries actually gives me enough possibilities.

If I cannot use existing code, I will write my own.

 

Well that is the whole point. Those libs in FPC (and possibly heaps more in the Delphi space) are existing code :-)

 

What reasons are there to keep an older software/OS alive? Because...

- there are still users out there (esp. concerning DOS)

- it's entertaining to support all possible targets

- keeping it for historical interests and future references, not losing knowledge

The question here is, why does a *compiler* actually have to update its older targets all the time?

 

My remark was purely from the practical side. You have to have something releasable, and you have to have somebody interested (entertained if you will) enough to keep it releasable. Old versions are enough for historical interest; this is about new releases.

 

(why update old targets)

To have bugfixes and other improvements in the project propagate. And to react to changes in the OS and other

dependent parts (in our case: GDB, but also >32-bit file support, LFN, sharing etc., sockets).

 

Keeping a release working is far less work than brushing up a forgotten port and bringing it up to a workable state

again if something blocking appears. Because then the knowledge must be quickly rediscovered, which is usually hard.

Exactly the same reasoning is why I suggest starting to play with FPC, just to get a feel and acquire some knowledge

at a calm, but steady pace.

 

From the language point of view there should be no need to change anything target specific

(btw, except for a few glitches the VP language support is good enough for me).

 

Releases are never aimed at just one person. One person tries to compile on a network drive, somebody else uses a codepage type not yet supported, somebody else wants to copy a DVD image that is >4GB, somebody else uses the IDE in changed resolutions etc. At some point it is just easier to push out the current state than to decide which "for all platforms" fixes to propagate back to the RTL/FCL of some old version. And then I'm not even talking about who is willing to waste time on that.

 

Of course, if the RTL is not complete, like missing LFN, then you might need to update it.

But at some point of development it should be possible to drop active development

without making the DOS port unusable in future releases.

What makes the FPC core devs have to update the DOS RTL constantly over the years?

 

This is often assumed, but totally not workable.

 

The constant updating is fixing bugs and providing support for higher-level abstractions that are needed to keep cross-platform stuff like the IDE running.

Note that the number of fixes is not that high, but they have to be done, or at least tracked. But just as much, it is also validation of changes to the generic part.

 

I don't agree on the assembler statement, but let's not start a discussion on that. ;)

 

I program in assembler professionally sometimes. I don't mind it, and I'm not an anti-assembler fanatic. But there are serious management issues with non-trivial amounts of assembler. Don't believe me? Read any software management book. If you don't agree, I'd like to see an ARM port of VP.

 

FP DOS == OS/2:

Why should newer FP releases suddenly break/lose the OS/2 support?

 

Same reasons as with dos. Stopping maintenance for several years is effective death in a fast-moving project (14000 revisions since 2005). Tackling a problem when it occurs, while you can still communicate with the person who did it and maybe influence the design, is way more productive.

 

OS/2 is a bigger problem, because it was not even complete.

 

(1) Yes, VP as well as NDN users are pretty conservative. But you get used to it and become careful with

changes and new features. In fact that is the real-life situation in professional projects.

(2)I personally would have no problem with a maintenance mode, except for the possibility to add new compiler targets.

 

(1)

True. But in professional work, those are often in minimal-maintenance-only mode. Professionals usually migrate when they are stuck in a dead end with a codebase that still needs active maintenance and development.

 

(2)

To be honest, VP is not even in maintenance mode atm. The last version is 4 years old. Yes, maybe a minimal patch set is coming, and that is a good thing, but one minor patch set in 4 years is not even maintenance anymore.

 

(1) I totally understand you; if I were a maintainer/user of FP I would have hoped for the same things to happen as you did.

Fact is, it didn't happen.

(2) But even if DOS users send FP bug reports, who will fix them?

In fact you not only need the users, but users who can fix the problems too.

 

(1)

Fact is that effectively the VPascal community evaporated to other languages, except for some tinkerers and some people maintaining aging codebases.

That "defection" is what I wanted to avoid, more than winning souls for FPC. I know I'm automatically suspect because of my FPC involvement, but my involvement in VP has always been about what is best VP's community from my viewpoint.

 

And another important point is that I don't think there was ever a serious chance that VP would be revived in 2004-2005. Though the reality (not even continuous minimal maintenance) was even worse than I expected.

 

(2)

Well, mostly the people fixing FPC dos bugs. In the merger case that team would have been strengthened with some VP people and intermediate users (quality bugreports and research are almost as important as the fix itself). And the resulting "keep dos alive" team would win by not having to provide fixes for bugs that are not dos-specific (the bulk).

 

I have to repeat myself:

As long as VP is good enough for my future NDN plans, I will keep using it.

If I ever have to switch to FP, I will let you know! :)

 

I understood you the first time, and repeated myself in explaining that I thought (and think) it is a bad choice, but what can I do? :-)

 

The main thing I don't understand is not so much that people stay with VP, but more that they don't start testing with FPC in parallel, to keep a bit of continuity if some blocking problem pops up. Because when that happens, too much change is required too fast to migrate to FPC (and of course FPC then gets the blame of being incompatible, buggy and incomplete).

That goes for an updated VP too.

Which I won't use anyway, maybe I will merge some code but that's it. :)

 

(why update old targets)

To have bugfixes and other improvements in the project propagate. And to react to changes in the OS and other

dependent parts (in our case: GDB, but also >32-bit file support, LFN, sharing etc., sockets).

LFN is vital of course.

I am curious: which DOS supports working 2/4GB+ files?

I know that there is FAT32+, but I don't know if it really works.

 

But there are serious management issues with non-trivial amounts of assembler.

Don't believe me? Read any software management book. If you don't agree, I'd like to see an ARM port of VP.

I am not talking about a total rewrite of software largely written in assembler.

My point is that the argument of such software not being able to be maintained or

updated/improved is invalid, IMHO. There's always someone who can do it.

 

Anyway, your ARM-VP-port argument is unfair. :P

VP probably never was designed to be ported to other OS's or to support non-x86 CPUs.

 

Same reasons as with dos. Stopping maintenance for several years is effective death in a fast-moving project (14000 revisions since 2005). Tackling a problem when it occurs, while you can still communicate with the person who did it and maybe influence the design, is way more productive.

OS/2 is a bigger problem, because it was not even complete.

This is what I don't understand, especially concerning a compiler (sorry for repeating myself again):

Software does not need to change low-level (OS) code constantly with new features or bugfixes.

If you have to do that then you probably have a code design problem.

Higher-level abstractions should work with most new features independently of the target OS.

 

True. But in professional work, those are often in minimal-maintenance-only mode. Professionals usually migrate when they are stuck in a dead end with a codebase that still needs active maintenance and development.

Yes, because there's no time to waste.

There's no good argument that explains to your boss or customers why you have

to "waste" quite some time just to use another development software.

Especially if there will be no "visible" improvment after the software switch.

Which I won't use anyway, maybe I will merge some code but that's it. :)

 

I already guessed that.

 

LFN is vital of course.

I am curious: which DOS supports working 2/4GB+ files?

I know that there is FAT32+, but I don't know if it really works.

 

There are some with TSRs. But it was more meant relative to TP, less to VP which

I assume supports TSRs under windows.

 

I am not talking about a total rewrite of software largely written in assembler.

My point is that the argument of such software not being able to be maintained or

updated/improved is invalid, IMHO. There's always someone who can do it.

 

Sure, just like somebody COULD port it to ARM if he invested enough time. The question is if it is sane.

 

Anyway, your ARM-VP-port argument is unfair. :P

VP probably never was designed to be ported to other OS's or to support non-x86 CPUs.

 

I don't see it that way. Sure, it is more extreme, but it demonstrates that the project was not set up

for portability and modularity, hindering maintenance and extension.

 

This is what I don't understand, especially concerning a compiler (sorry for repeating myself again):

Software does not need to change low-level (OS) code constantly with new features or bugfixes.

 

The OS code plugs into the generic part of the RTL, which is updated/extended etc. Some OS-dependent

part doesn't contain proper initialization for the new part, and BANG.

 

If you have to do that then you probably have a code design problem.

Higher-level abstractions should work with most new features independently of the target OS.

 

That is something from the software development education ivory tower.

Even though it's true, no abstraction that models something real-world is ever perfect, especially when large enough.

The abstractions are occasionally adapted, and not every adaptation or extension of the abstraction is entirely without code changes in platform-dependent code

(e.g. due to initializations). Typical reasons are OSes and architectures getting added or mutating, and

increased Delphi compatibility (which can

be surprisingly low-level), which require a (slightly) different or wider abstraction.

 

And even existing code is not entirely free of change. E.g. the introduction of threadvars required

hunting down places where pointers to global vars might be checked.
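
For illustration, a minimal sketch (standard threadvar syntax; the names are mine) of why that hunt was needed - a threadvar has a different address in every thread, so a pointer taken once and stored globally silently goes stale:

  program ThreadvarSketch;
  var
    GlobalBuf: array[0..255] of Char;  { one instance, shared by all threads }
  threadvar
    LocalBuf: array[0..255] of Char;   { a separate instance in every thread }
  var
    p: Pointer;
  begin
    p := @LocalBuf;  { valid only in the thread that executed this line - }
                     { caching such a pointer in a global is exactly the }
                     { kind of code that had to be hunted down }
  end.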

 

Moreover, how do you know that the abstraction really saved you without testing it, and that you haven't violated it? And then we

are back to the fact that those platforms (OS/2 and Dos) are only stressed around releases and not

continuously in between.

 

In the FreeBSD/Linux kernel project, they have a word for what happens with unmaintained code that is not validated as the code

around it changes: BITROT.

 

I don't have any illusion that VP or TP would be very different; they too would break compatibility occasionally if they were still actively developed.

 

Yes, because there's no time to waste.

There's no good argument that explains to your boss or customers why you have

to "waste" quite some time just to use another development software.

Especially if there will be no "visible" improvment after the software switch.

 

Then it is a bad boss. Productivity and risk management arguments should appeal to a boss. Is your boss an ex-salesperson? :-)

And the problem with the customers is: why do you try to explain that in the first place? They don't have enough context to understand it anyway. Just do the migration and change a few colors. Done.

 

Seriously, enough with the BOFH talk. If this were universally true, why aren't we doing our accountancy records like the Flintstones did? Carving them in stone? It works in principle, and you never have to account for a migration to a new technology. Right! Productivity goes down the drain.


Hi Marco!

I wanted to end this thread, but I found a few arguments that I had to answer. :)

 

The OS code plugs into the generic part of the RTL, which is updated/extended etc. Some OS-dependent

part doesn't contain proper initialization for the new part, and BANG.

So, shouldn't the updated RTL be able to handle the fact that not all targets will

be on the same level? At least there should be no BANG. :)

 

That is something from the software development education ivory tower.

True.

 

Even though it's true, no abstraction that models something real-world is ever perfect, especially when large enough.

The abstractions are occasionally adapted, and not every adaptation or extension of the abstraction is entirely without code changes in platform-dependent code

(e.g. due to initializations). Typical reasons are OSes and architectures getting added or mutating, and

increased Delphi compatibility (which can

be surprisingly low-level), which require a (slightly) different or wider abstraction.

Agreed - I just think that all these reasons should not render a port useless, even if it's not at

the same level as the most popular ports.

 

Moreover, how do you know that the abstraction really saved you without testing it, and that you haven't violated it? And then we

are back to the fact that those platforms (OS/2 and Dos) are only stressed around releases and not

continuously in between.

That's true too of course, if you break or change an abstraction layer that already worked...

 

In the FreeBSD/Linux kernel project, they have a word for what happens with unmaintained code that is not validated as the code

around it changes: BITROT.

My favourite topic... Bitrot is especially true for Linux and FreeBSD where the "developers" behave like this:

- Source code is the best documentation

- No need to keep APIs or constants "constant" because the source code is available and can be recompiled on any system anytime

- Writing APIs or drivers for freaks, not end users

 

But let's not continue too much on this...

 

 

Then it is a bad boss. Productivity and risk management arguments should appeal to a boss. Is your boss an ex-salesperson? :-)

Mine isn't. :P

No productivity at all during the months that will be spent without producing any new value, i.e. feature improvements or corrections.

The risk of breaking existing functionality, esp. in large and/or complex projects. Even more in "dangerous" software projects that

involve machines f.ex.

 

And the problem with the customers is: why do you try to explain that in the first place? They don't have enough context to understand it anyway. Just do the migration and change a few colors. Done.

The customer might be interested in the reasons why he gets a software update.

If his system or machine won't work afterwards you might have to explain why.

 

Seriously, enough with the BOFH talk. If this were universally true, why aren't we doing our accountancy records like the Flintstones did? Carving them in stone? It works in principle, and you never have to account for a migration to a new technology. Right! Productivity goes down the drain.

High productivity can be interpreted in several ways:

- concerning NDN: many releases with new features and bug fixes each year, introducing few to no new bugs per release, maybe even adding a new OS target someday

- concerning my company: selling many machines each year, with hardly any downtime, producing a lot of material the customer can sell

 

Stefan / AH

So, shouldn't the updated RTL be able to handle the fact that not all targets will

be on the same level? At least there should be no BANG. :)

 

No, of course not. You also assume that the existing code adheres perfectly to the abstraction

in the first place, which is unlikely after a long gap and minimal fixes.

 

A bang can be simply a compilation failure, or even a port that fully works at

an initial glance (with problems showing later). If you have this a couple of times over the

7-8 years that the dos port is unmaintained and only provisionally/minimally fixed in a hurry

just before a release, at a certain point the chance of this is bigger than the chance that it will simply work.

(And of course you don't notice the things that simply keep working; we are already talking about the exceptions to that here.)

 

And that is the bitrot bit. The first change is not the problem; the strain of many changes and quick fixes over years is. If you have only one problem, you can quickly binary-search to the cause and resolve it. If you encounter many problems just before a release, the causal link between the changes and what went wrong often cannot be found so easily.
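
(To illustrate the binary-search point, a minimal sketch; BuildAndTest is a hypothetical stand-in for "svn up -r Rev, rebuild, run the smoke tests", not actual FPC tooling:)

  program Bisect;
  {$mode objfpc}

  // hypothetical helper: would check out revision Rev, rebuild the
  // compiler/RTL and smoke-test the target; stubbed so the sketch runs
  function BuildAndTest(Rev: Integer): Boolean;
  begin
    Result := Rev < 11500; // pretend revisions >= 11500 are broken
  end;

  // classic binary search between a known-good and a known-bad
  // revision: one isolated breakage takes only ~log2(range) rebuilds
  function FirstBadRevision(GoodRev, BadRev: Integer): Integer;
  var
    Mid: Integer;
  begin
    while BadRev - GoodRev > 1 do
    begin
      Mid := (GoodRev + BadRev) div 2;
      if BuildAndTest(Mid) then
        GoodRev := Mid  // still works: the breakage came later
      else
        BadRev := Mid;  // broken: the breakage is here or earlier
    end;
    Result := BadRev;   // first revision known to be broken
  end;

  begin
    WriteLn('first bad revision: ', FirstBadRevision(10000, 12000));
  end.

With many overlapping breakages, that loop has no single good/bad boundary to search for, which is exactly the problem.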

 

Keep in mind we are talking about 25000 commits from 20-30 people in three quarters of a million lines of code, many of them quite sensitive.

 

Agreed - I just think that all these reasons should not render a port useless, even if it's not at the same level as the most popular ports.

 

Nearly any solution needs active maintenance. Check with a test suite? Somebody has to check the results and add the relevant tests for new systems. Test a release thoroughly? That needs beta testers. Work with the IDE? That needs developers + testers.

 

That's true too of course, if you break or change an abstraction layer that already worked...

 

This works perfectly indeed. But only in frozen systems that receive just minor bugfixes, not in actively developed systems. Currently FPC is gearing up to do unicode, and to have modes where the default string type is unicode. For DOS there is more work to do than for other systems, and a lot of existing code needs to be checked for unicode cleanness; who is going to do it?
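
(For the record, "unicode cleanness" is about code like the following made-up example, which compiles silently but loses data once the default string type has 2-byte Chars:)

  // hypothetical example: copy a non-empty string into a raw buffer
  procedure CopyToBuffer(const S: string; var Buf);
  begin
    // fine while string = AnsiString (1 byte per Char); once string
    // becomes UnicodeString, Length(S) counts 2-byte Chars while Move
    // copies bytes, so only half the data arrives
    Move(S[1], Buf, Length(S));
  end;

  // unicode-clean version: measure in bytes, not in characters
  procedure CopyToBufferClean(const S: string; var Buf);
  begin
    Move(S[1], Buf, Length(S) * SizeOf(Char));
  end;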

 

My favourite topic... Bitrot is especially true for Linux and FreeBSD, where the "developers" behave like this:
- Source code is the best documentation
- No need to keep APIs or constants "constant", because the source code is available and can be recompiled on any system anytime
- Writing APIs or drivers for freaks, not end users

 

I think this is unfair. I don't like the choices they have made either, but they are from a different culture, and there actually were damn good reasons for them at the time. Attributing them to laziness just shows you know nothing of the topic or culture.

 

I do think *nix development, especially the free variants, has a tendency to be too evolutionary, and not to phase the evolution properly. This is an open-source disease. But like many open-source diseases, everyone is responsible, since there is simply nobody to be found willing to do that work.

 

Mine isn't. :P

No productivity at all during the months that would be spent without producing any new value, i.e. feature improvements or corrections. The risk of breaking existing functionality, esp. in large and/or complex projects, and even more so in "dangerous" software projects that involve machines, for example.

 

Well, that is the penalty for having been an ostrich for too long. That is a symptom that you have exploited the old codebase too long, and not invested in improvements/rewrites when you still had the time. I'm in machine manufacturing too (machine vision for the paper and bottle industry), and we always have a stable software branch while working at the same time on the next generation/rewrite. The change of branches to production is a carefully orchestrated move, where we start with nearby, trusted clients with less demanding systems, and then slowly roll out over the whole range.

 

And there is a parallel here with Virtual Pascal. Most of its users have been ostriches too long too, and now must face the whole transition in one short stretch, or give up. And that is painful, very painful, and much more painful than needed, but it is their own fault. They have known since at least 2003 that there was a fair chance that VP was doomed.

 

The customer might be interested in the reasons why he gets a software update. If his system or machine won't work afterwards, you might have to explain why.

 

Working in convoluted codebases carries risk too. Often rewrites are also done to stamp out stuff that doesn't scale.

 

A lot of our reworks are done to make codebases maintainable with less risk of mistakes on modification. Of course this has to be weighed against the likelihood that there will be significant modification at all. Existing customers all have their own branches in SVN for this purpose.

 

Everything is always a tradeoff.

 

High productivity can be interpreted in several ways:
- concerning NDN: many releases with new features and bug fixes each year, introducing few to no new bugs per release, maybe even adding a new OS target someday
- concerning my company: selling many machines each year, with hardly any downtime, producing a lot of material the customer can sell

 

NDN: you'll have to go to a new compiler first, and that was what this thread was originally about.

 

workwise: we are in quality control, so we only lower production (at least that is the client's perception) <_<

 

Anyway, I have a 2.4.0 to get out. I hope to have an RC ready this year.

Share this post


Link to post
Share on other sites
A bang can be simply a compilation failure, or even a port that fully works at first glance (with problems showing later). If you have this a couple of times over the 7-8 years that the DOS port is unmaintained and only provisionally/minimally fixed in a hurry just before a release, at a certain point the chance of this is bigger than the chance that it will simply work. (And of course you don't notice the things that simply keep working; we are already talking about the exceptions to that here.)

Ok, I didn't think of BANG like you did.

 

And that is the bitrot bit. The first change is not the problem; the strain of many changes and quick fixes over years is. If you have only one problem, you can quickly binary-search to the cause and resolve it. If you encounter many problems just before a release, the causal link between the changes and what went wrong often cannot be found so easily.

I do understand; your Unicode bit from down below is a good argument for this. Still I claim that bitrot can be prevented or minimized by carefully planning and updating APIs.
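
(One well-known way to do that, sketched here with made-up names: give every API record a size field, like the Win32 cbSize convention, so that new versions only append fields and never break older callers:)

  type
    // a hypothetical API record; v2 only appends fields to v1,
    // it never reorders or removes them
    TDirEntryV2 = record
      StructSize: LongInt;  // caller sets SizeOf() of the record version it knows
      Name: ShortString;
      Attr: Byte;
      ModTime: Int64;       // new in v2
    end;
    PDirEntryV2 = ^TDirEntryV2;

  procedure FillDirEntry(E: PDirEntryV2);
  begin
    E^.Name := 'README.TXT';
    E^.Attr := $20;
    // an old binary hands in a v1 record with a smaller StructSize;
    // the new field is then simply skipped, so old callers keep working
    if E^.StructSize >= SizeOf(TDirEntryV2) then
      E^.ModTime := 0;
  end;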

 

Keep in mind we are talking about 25000 commits from 20-30 people in three quarters of a million lines of code, many of them quite sensitive.

We've been talking about big projects the whole time...

 

This works perfectly indeed. But only in frozen systems that receive just minor bugfixes, not in actively developed systems. Currently FPC is gearing up to do unicode, and to have modes where the default string type is unicode. For DOS there is more work to do than for other systems, and a lot of existing code needs to be checked for unicode cleanness; who is going to do it?

You probably expected the next answer from me:
"The ones that implement a feature that has such an impact on the program."
If a target does not support unicode, or not yet, make sure it doesn't "bang". If possible, make "fall-through" code available that handles these cases without touching the target system, which may be in need of a rewrite to support the new feature.
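
(What I have in mind, as a minimal sketch with FPC-style conditionals; HAS_UNICODE_TABLES and UnicodeUpperCase are invented names standing in for a target capability and its full implementation, not real FPC symbols:)

  uses SysUtils;

  function SafeUpperCase(const S: string): string;
  begin
  {$IFDEF HAS_UNICODE_TABLES}
    Result := UnicodeUpperCase(S); // full-featured path on capable targets
  {$ELSE}
    Result := UpperCase(S);        // plain ASCII fall-through: still builds
                                   // and runs on DOS, no "bang"
  {$ENDIF}
  end;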

 

Of course, if developers do not see or value the whole project (i.e. each target), then the target or project is lost...

 

I think this is unfair. I don't like the choices they have made either, but they are from a different culture, and there actually were damn good reasons for them at the time. Attributing them to laziness just shows you know nothing of the topic or culture.

Who is being unfair now?
The fact that NDN works quite well on GNU/Linux proves that I know more about "them", the "topic" and "their culture" than you imply I do.

 

I do think *nix development, especially the free variants, has a tendency to be too evolutionary, and not to phase the evolution properly. This is an open-source disease. But like many open-source diseases, everyone is responsible, since there is simply nobody to be found willing to do that work.

Yes, that's another thing I'd like to add to my bitrot list from my last post:
- A project can be left unmaintained at any time, finished or not, since it's open source and someone else can pick it up

An entertaining site on this: http://linuxhaters.blogspot.com/

 

And there is a parallel here with Virtual Pascal. Most of its users have been ostriches too long too, and now must face the whole transition in one short stretch, or give up. And that is painful, very painful, and much more painful than needed, but it is their own fault. They have known since at least 2003 that there was a fair chance that VP was doomed.

There's no good moment to replace a complete development system. It's an all-or-nothing choice.

 

NDN: you'll have to go to a new compiler first,

...then I should port everything to C/C++ to solve all future problems and make all target operating systems possible.

 

and that was what this thread was originally about.

True.

 

Anyway, I have a 2.4.0 to get out. I hope to have a RC ready this year.

Good luck.

 

Share this post


Link to post
Share on other sites
I do understand; your Unicode bit from down below is a good argument for this. Still I claim that bitrot can be prevented or minimized by carefully planning and updating APIs.

 

One can, for a year or so. But DOS has already been in this state since 2000. If we had implemented your policy, FPC would still be in the same state as in 2000.

 

You probably expected the next answer from me:
"The ones that implement a feature that has such an impact on the program."
If a target does not support unicode, or not yet, make sure it doesn't "bang". If possible, make "fall-through" code available that handles these cases without touching the target system, which may be in need of a rewrite to support the new feature.

 

That's possible if you support two targets and all devels know a bit of both. But if you support 20, this would result in zero features being added, since simply nobody knows all platforms (and only a handful know about DOS /programming/ anyway).

 

Same with newer targets needing newer GDB versions to debug (e.g. win64), while for DOS there aren't even binaries, let alone validated FPC bindings.

 

Of course, if developers do not see or value the whole project (i.e. each target), then the target or project is lost...

 

That would mean those targets could keep a stifling hold on general progress, and nothing would happen anymore. And since DOS lacks almost everything, that would happen with nearly any change.

 

While it is a target with very, very low usage (Windows or Linux downloads vs. DOS downloads is something like 20000:1).

 

Most devels really do value the whole project and each target, and assist the platform maintainers as much as they can. But you simply can't force people to spend all their free time for half a year on a target apparently nobody is interested in anymore.

 

And keep in mind that we are relatively mild already. Most projects dropped DOS five years ago or more, leaving it to frequently changing 3rd-party builders to try to make something of it.

 

In the last 2 years, Win9x is even starting to be phased out (and actually we have decided to do that too, if we encounter a major problem with it).

 

Who is being unfair now?
The fact that NDN works quite well on GNU/Linux proves that I know more about "them", the "topic" and "their culture" than you imply I do.

 

(don't take it personally, my discussion style is sometimes a bit direct, but never meant personally)

 

Well, then you should know that Unix has no binary tradition at all; its APIs are formulated not in binary terms, but as C headers.

 

I don't like it either, but it is a consistent, different approach, and just because you (and I) care about binary APIs, you can't expect everybody to.
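
(In practice that means a Pascal compiler should import what the header describes from libc and let the C library track the syscall numbers, rather than hardcoding int 80h. A minimal sketch in plain FPC external syntax, as an illustration, not the RTL's actual binding:)

  program LibcBinding;
  {$mode objfpc}
  {$linklib c}

  // the stable contract is the C prototype from <unistd.h>
  // (pid_t getpid(void);), not a fixed binary interface
  function getpid: LongInt; cdecl; external 'c' name 'getpid';

  begin
    WriteLn('my pid: ', getpid);
  end.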

 

Yes, that's another thing I'd like to add to my bitrot list from my last post:
- A project can be left unmaintained at any time, finished or not, since it's open source and someone else can pick it up

An entertaining site on this: http://linuxhaters.blogspot.com/

 

That site is pathetic in an adolescent way. Half of the people who erect such sites are using pirated versions of Windows anyway (with which they more or less prove that Windows is not worth the cost either).

 

They are nothing but disgruntled amateur users who have had a free ride on expensive commercial tools in the past, and who now think that because of that they have a _right_ to such support and to having their wishes heard. It doesn't work this way (and Microsoft hardly listens to end users anyway, unless there is a major revolt like the Me or Vista cases; their agenda is mostly set by big corporate business, with the knowledge that medium/small business generally follows its lead).

 

Moreover, they think they can exert pressure that way, but they have no clue about the real tradeoffs the real developers face, and they don't offer any solutions. They should simply buy a Windows version, and a full Visual Studio suite if they can't handle anything else. Then Microsoft support will tell them to bugger off because they don't have a very expensive support plan, and they'll probably erect a site to demonize them ;-)

 

There's no good moment to replace a complete development system. It's an all-or-nothing choice.

 

The final switch is. But testing and working on migrational aspects bit by bit can go a long way, and that was exactly my point. If you wait for the perfect moment without actively planning a migration, you can wait forever. It is clearly visible in e.g. parts of the TP crowd that is still waiting for a compiler that mimics I/O ports and direct screen writes etc., while an open-minded look at the alternatives would maybe have given a bit more overview, and a realization that the single-digit-MHz years and DOS-only hardware interfacing are behind us. Long behind us.

 

The same goes for the VP crowd. Always whining about things not being ready, and never doing anything. Always the same entry-level questions, and waiting for somebody to do it for them. FPC core is not unwilling to do something about it, but if you try to deal with them, there is nothing but loose sand.

 

...then I should port everything to C/C++ to solve all future problems and make all target operating systems possible.

 

Good luck :-) I have thought about this myself in the past for work purposes, but the problem is that I haven't really seen a system like FPC/Lazarus for C/C++ with a balanced approach to being multiplatform. Usually it is heavily biased towards *nix, with a huge rift between the unix-centric users and the DOS/OS/2/Windows users. OS X again is totally different with its Objective-C approach.

 

And since in most scenarios Windows is the majority platform, it simply doesn't do to have only proof-of-concept software there. C++/Qt comes closest, but I prefer my widget sets a bit more open to working around problems with API calls. Or maybe Java for purely desktop cases. But both are compromises I don't really like. It helps that at least the non-OS-dependent libraries (like Boost) are getting more standardized now. My Borland C++ (2009 edition) came with Boost.

 

In short, I decided to postpone multiplatform (at work) as long as possible, and try a Delphi/Lazarus combination if I can't avoid it anymore.

 

Good luck.

 

RC1 building went faster than expected. Most targets are already uploaded (and, not unexpectedly, DOS is lagging again, due to "first touched in months" syndrome).

 

Same story again: after a threat to remove it after 2.2.0, a few contributors stepped up, but they ran out of steam after a few months to a year, and the cycle repeats. Currently we are apparently at the bottom of the curve, with no DOS-related action or questions on the mailing list for months, maybe even close to a year, if you discount the release building this past May. However, that May release was an incremental release from a stabilized branch, which didn't need much action.

Share this post


Link to post
Share on other sites
