Muxe Inc Forums
Posts posted by marco van de Voort


  1. Btw, meanwhile I made a quick attempt to modernize FPC's Turbo Vision-compatible lib (Free Vision) by migrating it from shortstring to ansistring (which is scheduled to become unicode-aware in time).

     

    Unfortunately this is a no go. Like Delphi, FPC doesn't support initialization/finalization for TP "object" objects, making such an attempt extremely painful.

     

    This means that nothing short of a rewrite on "class" basis will do.
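
    To illustrate the pain (a minimal sketch; the type and names are made up, and it assumes, as described above, that the compiler generates no initialization/finalization code for TP-style objects): a shortstring field is just a fixed buffer inside the object, but an ansistring field is a managed, reference-counted pointer that must start out nil.

        type
          TOldStyle = object
            Name: AnsiString;   { never set to nil for a stack instance }
          end;

        procedure Demo;
        var
          O: TOldStyle;         { O.Name contains stack garbage here }
        begin
          O.Name := 'hello';    { the assignment first tries to release the
                                  garbage "previous value": potential crash }
        end;

    With a "class", the compiler zero-fills the instance on creation and finalizes managed fields on destruction, which is why a rewrite on class basis would sidestep the whole problem.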


  2. I do understand; your Unicode bit from down below is a good argument for this.

    Still I claim that bitrot can be prevented or minimized by carefully planning and updating APIs.

     

    One can, for a year or so. But Dos has already been in this state since 2000. If we had implemented your policy, FPC would still be in the same state as in 2000.

     

    You probably expected the next answer from me:

    "The ones that implement a feature that has such an impact on the program."

    If a target does not support unicode, or not yet, make sure it doesn't "bang". If possible, make "fall through" code available that will handle these cases without touching the target system that may be in need of a rewrite to support the new feature.

     

    That's possible if you support two targets, and all devels know a bit of both. But if you support 20, this would result in zero features being added, since simply nobody knows all platforms (and only a handful know about Dos /programming/ anyway).

     

    Same with newer targets needing newer GDB versions to debug (like e.g. win64), while there aren't even binaries for dos, let alone validated FPC bindings.

     

    Of course, if developers do not see or value the whole project (i.e. each target) then the target or project is lost...

     

    That would mean such targets could put a stifling hold on general progress, and nothing would happen anymore. And since Dos lacks almost everything, that would happen with nearly any change.

     

    While it is a target with very, very low usage (the ratio of Windows or Linux downloads to Dos downloads is something like 20000:1).

     

    Most devels do really value the whole project and each target, and assist the platform maintainers as much as they can. But you simply can't force people to spend all their free time for half a year on a target apparently nobody is interested in anymore.

     

    And keep in mind that we are relatively mild already. Most projects dropped Dos 5 years ago or longer, leaving it to ever-changing 3rd party builders to try to make something of it.

     

    In the last 2 years, even win9x is starting to be phased out (and actually we have decided to do that too, if we encounter a major problem with it).

     

    Who is being unfair now?

    The fact that NDN works quite well on GNU/Linux proves that I know more about "them", whatever the "topic" and "their culture", than you imply I do.

     

    (don't take it personally, my discussion style is sometimes a bit direct, but never meant personally)

     

    Well, then you should know Unix has no binary tradition at all, and APIs are specified not in binary form, but as C headers.

     

    I don't like it either, but it is a consistent, different approach, and just because you (and I) care about binary APIs, you can't expect that everybody does.

     

    Yes, that's another thing that I'd like to add to my bitrot list from my last post:

    - Project can be left and unmaintained at any time, no matter if finished or not, since it's open source and someone else can pick it up

    An entertaining site on this: http://linuxhaters.blogspot.com/

     

    That site is pathetic in an adolescent way. Half of the people that erect such sites are using pirated versions of Windows anyway (with which they more or less prove that Windows is not worth the cost either).

     

    They are nothing but disgruntled amateur users who have had a free ride on expensive commercial tools in the past, and now think that because of that they have a _right_ to such support and to having their wishes listened to. It doesn't work this way (and Microsoft hardly listens to end users anyway, unless there is a major revolt like the Me or Vista cases; their agenda is mostly set by big corporate business, with the knowledge that medium/small business generally follows their lead).

     

    Moreover, they think they can exert pressure that way, but have no clue about the real tradeoffs the real developers face, and don't offer any solutions. They should simply buy a Windows version, and a full Visual Studio suite if they can't handle anything else. Then Microsoft support will tell them to bugger off because they don't have a very expensive support plan, and they'll probably erect a site to demonize them ;-)

     

    There's no good moment to replace a complete development system. This is a choice of all or nothing.

     

    The final switch is. But testing and working on migration aspects bit by bit can go a long way, and that was exactly my point. If you wait for the perfect moment without actively planning a migration, you can wait forever. It is very clearly visible in e.g. parts of the TP crowd that are still waiting for a compiler that mimics I/O ports and direct screen writes etc., while an open-minded look at other alternatives would maybe have given a bit more overview, and a realization that the single-digit-MHz years and Dos-only hardware interfacing are behind us. Long behind us.

     

    The same for the VP crowd. Always whining that it is not ready, and never doing anything themselves. Always the same entry-level questions, and waiting for somebody to do it for them. FPC core is not unwilling to do something about it, but if you try to deal with them, there is nothing but loose sand.

     

    ...then I should port everything to C/C++ to solve all future problems and make all target operating systems possible.

     

    Good luck :-) I have thought about this myself in the past for work purposes, but the problem is that I haven't really seen an FPC/Lazarus-like system for C/C++ that has a balanced approach to being multiplatform. Usually it is heavily biased towards *nix, with a huge rift between the Unix-centric users and the Dos/OS/2/Windows users. OS X again is totally different with its Objective-C approach.

     

    And since in most scenarios Windows is the majority platform, it simply doesn't do to have only proof-of-concept software there. C++/Qt comes closest, but I prefer my widget sets a bit more open to working around problems with API calls. Or maybe Java for purely desktop cases. But both are compromises I don't really like. It helps that at least the non-OS-dependent libraries (like Boost) are getting more standardized now. My Borland C++ (2009 edition) came with Boost.

     

    In short, I decided to postpone multiplatform (at work) as long as possible, and try a Delphi/Lazarus combination if I can't avoid it anymore.

     

    Good luck.

     

    RC1 building went faster than expected. Most targets are already uploaded (and not unexpectedly, Dos is lagging again, due to "first touched in months" syndrome).

     

    Same story again: after a threat to remove it after 2.2.0, a few contributors stepped up, but ran out of steam after a few months to a year, and the cycle repeats. Currently we are apparently at the bottom of the curve, with no Dos-related action or questions on the mailing list for months, maybe even close to a year, if you discount the release building this past May. However, that past May release was an incremental release from a stabilized branch, which didn't need much action.


  3. So, shouldn't the updated RTL be able to handle the fact that not all targets will be on the same level? At least there should be no BANG. :)

     

    No, of course not. You also assume that the existing code adheres perfectly to the abstraction in the first place, which is unlikely after a long gap and minimal fixes.

     

    A bang can be simply a compilation failure, or even a port that works fully at first glance (with problems showing up later). If you have this a couple of times over the 7-8 years that the Dos port has been unmaintained and only provisionally/minimally fixed in a hurry just before a release, at a certain point the chance of this is bigger than the chance that it will simply work. (And of course you don't notice the things that simply keep working; we are already talking about the exceptions here.)

     

    And that is the bitrot bit. The first change is not the problem; the accumulation of many changes and quick fixes over the years is. If you have only one problem, you can quickly binary-search your way to the cause and resolve it. If you encounter many problems just before a release, the causal link to what happened often cannot be found so easily.

     

    Keep in mind we are talking about 25000 commits from 20-30 persons in 3/4 million lines of code, many of which are quite sensitive.

     

    Agreed - I just think that all these reasons should not render a port useless, even if it's not at the same level as the most popular ports.

     

    Nearly any solution needs active maintenance. Check with a test suite? Somebody has to check the results and add the relevant tests for new systems. Test a release thoroughly? That needs betatesters. Work with the IDE? That needs devels + testers.

     

    That's true too of course, if you break or change an abstraction layer that already worked...

     

    This works perfectly indeed, but only in frozen systems that receive minor bugfixes, not in actively developed systems. Currently FPC is gearing up to do unicode, and to have modes where the default string type is unicode. For DOS there is more work to do than for other systems, and a lot of existing code needs to be checked for unicode cleanness. Who is going to do that?

     

    My favourite topic... Bitrot is especially true for Linux and FreeBSD where the "developers" behave like this:

    - Source code is the best documentation

    - No need to keep APIs or constants "constant" because the source code is available and can be recompiled on any system anytime

    - Writing APIs or drivers for freaks, not end users

     

    I think this is unfair. I don't like the choices they have made either, but they are from a different culture, and there actually were damn good reasons for them at the time. Attributing them to laziness just shows you know nothing of the topic or culture.

     

    I do think *nix development, especially the free variants, has a tendency to be too evolutionary, and not to phase the evolution properly. This is an open source disease. But like many open source diseases, all are responsible, since there is simply nobody to be found who wants to do that work.

     

    Mine isn't. :P

    No productivity at all during the months that will be spent without producing any new value, i.e. feature improvements or corrections. The risk of breaking existing functionality, esp. in large and/or complex projects. Even more so in "dangerous" software projects that involve machines, for example.

     

    Well, that is the penalty for having been an ostrich for too long. It is a symptom that you have exploited the old codebase too long and not invested in improvements/rewrites while you still had the time. I'm in machine manufacturing too (machine vision for the paper and bottle industry), and we always have a branch of stable software while working at the same time on the next generation/rewrite; the switch of branches to production is a carefully orchestrated move, where we start with nearby, trusted clients with less demanding systems, and then slowly roll it out over the whole range.

     

    And there is a parallel here with Virtual Pascal. Most of its users have been ostriches for too long too, and must now face the whole transition in one short period, or give up. And that is painful, very painful, and much more painful than necessary, but it is their own fault. They have known since at least 2003 that there was a fair chance that VP was doomed.

     

    The customer might be interested in the reasons why he gets a software update. If his system or machine won't work afterwards, you might have to explain why.

     

    Working in convoluted codebases carries risk too. Often rewrites are also done to stamp out stuff that doesn't scale.

     

    A lot of our reworks have to do with making codebases maintainable, with less risk of making mistakes during modification. Of course this has to be weighed against the likelihood that there will be significant modification at all. Existing customers all have their own branches in SVN for this purpose.

     

    Everything is always a tradeoff.

     

    High productivity can be interpreted in several ways:

    - concerning NDN: many releases with new features and bug fixes each year, introducing few to no new bugs per release, maybe even adding a new OS target someday

    - concerning my company: selling many machines each year, with hardly any stop times, producing a lot of material the customer can sell

     

    NDN: you'll have to go to a new compiler first, and that was what this thread was originally about.

     

    workwise: we are in quality control, so we only lower production (at least that is the client's perception) <_<

     

    Anyway, I have a 2.4.0 to get out. I hope to have an RC ready this year.


  4. Which I won't use anyway, maybe I will merge some code but that's it. :)

     

    I already guessed that.

     

    LFN is vital of course.

    I am curious: which DOS supports working 2/4GB+ files? I know that there is FAT32+, but I don't know if it really works.

     

    There are some with TSRs. But it was meant more relative to TP, less to VP, which I assume supports TSRs under windows.

     

    I am not talking about a total rewrite of software largely written in assembler. My point is that the argument that such software cannot be maintained or updated/improved is invalid, IMHO. There's always someone who can do it.

     

    Sure, just like somebody COULD port it to ARM if he invested enough time. The question is whether it is sane.

     

    Anyway, your ARM-VP-port argument is unfair. :P

    VP probably was never designed to be ported to other OSes or to support non-x86 CPUs.

     

    I don't see it that way. Sure, it is more extreme, but it demonstrates that the project was not rigged

    for portability and modularity. Hindering maintenance and extension.

     

    This is what I don't understand, especially concerning a compiler (sorry for repeating myself again):

    Software does not need to change low-level (OS) code constantly with new features or bugfixes.

     

    The OS code plugs into the generic part of the RTL, which is updated/extended etc. Some OS-dependent part doesn't contain proper initialization for the new part, and BANG.
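
    To make the failure mode concrete, here is a hypothetical sketch of that plug-in pattern (none of these names are actual FPC RTL symbols): the generic RTL calls through a procedure variable that each platform's startup code is expected to assign. If a new feature adds such a hook and one port's startup code is not updated, the first use is a call through nil.

        type
          TGetTempDir = function: string;

        var
          { each target's init code must assign this }
          PlatformGetTempDir: TGetTempDir = nil;

        function GenericGetTempDir: string;
        begin
          { generic code assumes the target installed its implementation }
          Result := PlatformGetTempDir();  { nil on a port that missed it: BANG }
        end;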

     

    If you have to do that, then you probably have a code design problem. Higher-level abstractions should work with most new features independently of the target OS.

     

    That is something from the software development education ivory tower. Even though it's true in principle, no abstraction that models something in the real world is ever perfect, especially when it is large enough. The abstractions are occasionally adapted, and not every adaptation or extension of the abstraction is entirely without code changes in platform-dependent code (e.g. due to initializations). Typical reasons are OSes and architectures being added or mutating, and increased Delphi compatibility (which can be surprisingly low level), which require a (slightly) different or wider abstraction.

     

    And even existing code is not entirely free of change. E.g. the introduction of threadvars required hunting down places where pointers to global vars might be checked.
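
    For example (hypothetical names, just to show the class of bug): when a plain global becomes a threadvar, its address is no longer one fixed location, so any code that caches a pointer to it silently starts referring to a single thread's copy.

        threadvar
          ErrorCode: LongInt;     { was previously a plain global }

        var
          CachedErr: ^LongInt;

        procedure InitOnce;
        begin
          { captures only the address of the *current* thread's copy }
          CachedErr := @ErrorCode;
        end;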

     

    Moreover, how do you know, without testing, that the abstraction really saved you and that you haven't violated it? And then we are back to the fact that those platforms (OS/2 and Dos) are only stressed around releases and not continuously in between.

     

    In the FreeBSD and Linux kernel projects, they have a word for what happens to unmaintained code that is not validated as the code around it changes: BITROT.

     

    I don't have any illusion that VP or TP would be very different; they too would break compatibility occasionally if they were still being actively developed.

     

    Yes, because there's no time to waste.

    There's no good argument that explains to your boss or customers why you have to "waste" quite some time just to switch to other development software. Especially if there will be no "visible" improvement after the switch.

     

    Then it is a bad boss. Productivity and risk management arguments should appeal to a boss. Is your boss an ex-salesperson? :-) And as for the customers: why do you try to explain that in the first place? They don't have enough context to understand it anyway. Just do the migration and change a few colors. Done.

     

    Seriously, enough with the BOFH talk. If this were universally true, why aren't we doing our accountancy records like the Flintstones did, carving them in stone? It works in principle, and you never have to account for a migration to a new technology. Right. And productivity goes down the drain.


  5. But it might behave differently than what NDN requires, hence RTL modifications for NDN would maybe be required too.

     

    That goes for an updated VP too.

     

    I do plan expansion.

    The possibility to use C libraries actually gives me enough possibilities.

    If I cannot use existing code, I will write my own.

     

    Well, that is the whole point. Those libs in FPC (and possibly heaps more in the Delphi space) are existing code :-)

     

    What reasons are there to keep an older software/OS alive? Because...

    - there are still users out there (esp. concerning DOS)

    - it's entertaining to support all possible targets

    - keeping it for historical interests and future references, not losing knowledge

    The question here is, why does a *compiler* actually have to update its older targets all the time?

     

    My remark was purely from the practical side. You have to have something releasable, and you have to have somebody interested (entertained if you will) enough to keep it releasable. Old versions are enough for historical interest; this is about new releases.

     

    (why update old targets)
    To have bugfixes and other improvements in the project propagate, and to react to changes in the OS and other dependent parts (in our case: GDB, but also >32-bit file support, LFN, sharing etc., sockets).

     

    Keeping a release working is far less work than brushing up a forgotten port and bringing it back to a workable state if something blocking appears, because then the knowledge must be rediscovered quickly, which is usually hard. Exactly the same reasoning is why I suggest starting to play with FPC, just to get a feel and acquire some knowledge at a calm but steady pace.

     

    From the language point of view there should be no need to change anything target specific (btw, except for a few glitches the VP language support is good enough for me).

     

    Releases are never aimed at just one person. One person tries to compile on a network drive, somebody else uses a codepage not yet supported, somebody else wants to copy a DVD image that is >4GB, somebody else uses the IDE in changed resolutions, etc. At some point it is just easier to push out the current state than to decide which "for all platforms" fixes to propagate back to the RTL/FCL of some old version. And then I'm not even talking about who is willing to spend time on that.

     

    Of course, if the RTL is not complete, e.g. missing LFN, then you might need to update it. But at some point of development it should be possible to drop active development without making the DOS port unusable in future releases. What makes the FPC core devs have to update the DOS RTL constantly over the years?

     

    This is often assumed, but it is totally unworkable.

     

    The constant updating is fixing bugs and providing support for the higher-level abstractions that are needed to keep cross-platform stuff like the IDE running. Note that the number of fixes is not that high, but they have to be done, or at least tracked. And just as much, it is also validation of changes to the generic part.

     

    I don't agree on the assembler statement, but don't let us start a discussion on that. ;)

     

    I program assembler professionally sometimes. I don't mind it, and I'm not an anti-assembler fanatic. But there are serious management issues with non-trivial amounts of assembler. If you don't believe me, read any software management book. If you don't agree, I'd like to see an ARM port of VP.

     

    FP DOS == OS/2:

    Why should newer FP releases suddenly break/lose the OS/2 support?

     

    Same reasons as with Dos. Stopping maintenance for several years is effectively death in a fast-moving project (14000 revisions since 2005). Tackling a problem when it occurs, while you can still communicate with the person who did it and maybe influence the design, is way more productive.

     

    OS/2 is a bigger problem, because it was not even complete.

     

    (1) Yes, VP and NDN users are pretty conservative. But you get used to it and become careful with changes and new features. In fact that is the real-life situation in professional projects.
    (2) I personally would have no problem with a maintenance mode, except for the possibility to add new compiler targets.

     

    (1)

    True. But in professional work those are often in minimal-maintenance-only mode. Professionals usually migrate when they are stuck in a dead end with a codebase that still needs active maintenance and development.

     

    (2)

    To be honest, VP is not even in maintenance mode atm. The last version is 4 years old. Yes, maybe a minimal patch set is coming, and that is a good thing, but one minor patch set in 4 years is not even maintenance anymore.

     

    (1) I totally understand you; if I were a maintainer/user of FP I would have hoped for the same to happen as you did.

    Fact is, it didn't happen.

    (2) But even if DOS users send FP bug reports, who will fix them? In fact you need not only users, but users who can fix the problems too.

     

    (1)

    Fact is that effectively the VPascal community evaporated to other languages, except for some tinkerers and some people maintaining aging codebases.

    That "defection" is what I wanted to avoid, more than winning souls for FPC. I know I'm automatically suspect because of my FPC involvement, but my involvement in VP has always been about what is best VP's community from my viewpoint.

     

    And another important point is that I don't think there was ever a serious chance that VP would be revived in 2004-2005. Though the reality (not even continuous minimal maintenance) was even worse than I expected.

     

    (2)

    Well, mostly the people fixing FPC Dos bugs. In the merger case that team would have been strengthened with some VP people and intermediate users (quality bug reports and research are almost as important as the fix itself). And the resulting "keep Dos alive" team would win by not having to provide fixes for bugs that are not Dos-specific (the bulk).

     

    I have to repeat myself:

    As long as VP is good enough for my future NDN plans, I will keep using it.

    If I ever have to switch to FP, I will let you know! :)

     

    I understood you the first time, and repeated myself in explaining that I thought (and think) it is a bad choice, but what can I do? :-)

     

    The main thing I don't understand is not so much that people stay with VP, but more that they don't start testing with FPC in parallel, to keep a bit of continuity if some blocking problem pops up. Because when that happens, too much change is required too fast to migrate to FPC (and of course FPC then gets the blame for being incompatible, buggy and incomplete).


  6. Here's the first problem I see (if I understand you correctly):
     NDN *is* a DOS program. How many TP-compatible RTL calls would I have to replace with FP-compatible ones?

     

    Not that many, I think. It is more that the behaviour may differ slightly, just like e.g. I/O and the rules for filenames with an LFN unit are slightly different from original Dos.

     

    The point was more that we don't go as far as emulating Dos-specific quirks in the behaviour of calls, like VP does. At least afaik.

     

    I don't want to call it mutating. But from my point of view all RTL targets should behave the same (if possible). So yes, the VP W32 and LNX RTLs had to be modified, because they were not written with that in mind. I wonder what RTL comparison results I would get with FP.

     

    I think in general the effect would be beneficial. You might be able to clean up a lot, since the FPC RTL is simply richer, and designed for portability.

     

    Well, I have asked myself the same all the time for the past 6 years.
    Who cares about NDN? Or how many?
    There are some die-hard and loyal users who have stayed with me over the years.
    But there's no really big community - at least not obviously.

     

    Still, I enjoy working on NDN because there are so many things I can try out, and I can add almost everything to NDN that makes using a computer worthwhile. Like DOS FTP support.

     

    It works the same for me with FPC. Current FPC is quite fine, but say in the late 2003 timeframe the state was somewhat depressing.

     

    I look forward to 2GB+ support and getting the LNX port back on track.
    As soon as I hit plugins it will get really interesting.
    Even if NDN has been progressing slowly like in the past few months, I still don't have the feeling that it's not worth investing more time into it.

     

    If you plan expansion rather than mere maintenance, I'd look into FPC. It will expand with you, and surprise you occasionally, like the complete DBF and CHM support, and support for various image formats.

     

    I don't know all your reasons for working on FP.

    If you don't feel it's worth your time, then you have to stop and find something new.

     

    I have no problems like that with FPC, but I'm way more ambivalent about the Dos port. The main reason I work on it from time to time is that I have a weak spot for fullscreen textmode programs, and I work on the IDE occasionally. And most of the IDE bug reporters are on Dos.

     

    It would be a mistake to let the DOS port die completely, maybe even not supporting it at all anymore in future releases.

     

    Please provide reasons. Moreover, FPC core didn't let it die; the Dos users did, by not participating in development enough. The FPC core kept it on life support for years, just in the hope that somebody would step up. But at some point atrophy sets in and the result is no longer shippable. That happened during 2.2.0.

     

    But it is already quite hard to find enough users using and testing it to create bug reports, let alone developers. It's the same problem as with VP. You can't just will something alive; somebody with an interest must be there to do the work.

     

    But luckily FPC is very maintainable, which means that when people (like Tomas and Giullio a year back) invest some time in the Dos port, its quality quickly improves again, and probably now it will be OK again for a while. Till atrophy sets in again, or until regular patches, testing and (quality) bug reports from Dos users start flowing again. It only takes one dedicated person...

     

    Also there was Laaca (from the bttr forum), who has posted some significant Dos-related patches in the last two years.

     

    Mostly since only the Dos-specific parts need to be considered, which are on the order of 290kb, excluding graph.

     

    As much as it was a failure to drop active VP development, especially new targets...

     

    From what I have seen of it, it was completely sane. VP simply had no development community/culture, and as soon as the major contributors (Vitaly, Alan) dropped out, it was dead in the water. There hadn't been significant progress in the years before the end, and on the compiler side the standstill was even longer (both feature- and bug-wise). Here is what I remember of it:

     

    I have been in contact with Alan occasionally since 2000. Iirc the first mail was about how VP handled the Turbo Vision copyright situation, which was hurting us badly back then (1999-2000).

     

    During 2003 or 2004 Alan offered us a copy of the VP source code, to see if we could use parts (but not yet permission to actually do so). However, the fact that the full compiler is in assembler makes it essentially "don't touch". The IDE is hampered by years of workarounds and copyrighted code. The documentation needed commercial tools. Nothing was really directly reusable.

     

    While a minor disappointment, that was not that much of a problem for us; the main attractions were more in the OS/2 direction, where VP has traditionally been strong. However, FPC OS/2 development slowed down during that same period, due to the responsible people graduating iirc (and it hasn't recovered since), and new OS/2 releases are virtually non-existent, so there were simply no people to follow that up.

     

    So most of the VP code was almost totally unmaintainable, and the rewrite needed to tackle that and restart VP development would have alienated the extremely conservative users (who were with VP and not FPC exactly because of that conservatism). I have never believed in a successful restart of VP.

     

    An attempt would not have survived the two years (or longer) needed to totally rewrite, clean up and replace the copyrighted parts (it took FPC two years just to replace TV), and Pascal is not popular enough to survive that.

     

    But the biggest problem is that the main attraction of VP for people, its stability, would be compromised. So I think it would not work even for the current userbase, given enough developers. Somehow the VP users don't see that development and stability in one and the same codebase are mutually exclusive.

     

    Anyway, I digress; back to the timeline:

     

    Personally I would have liked VP to have been put in maintenance mode in 2003-2004, with all future efforts then directed into making migration to FPC possible. A merge of the projects at arm's length, so to say. I proposed that to Alan then, but he had doubts. Meanwhile heaps of indignant users pledged solemn allegiance on the mailing list to start developing VP, and Alan kept VP alive. I (and anybody else who had done any work on large Pascal projects) had serious doubts, since clearly none of them had compiler experience, and most didn't even have enough experience to make bugfixes.

     

    I didn't want to be a killjoy though, so I helped Noah Silva to backport FPC's sysutils to VP (to fix one of the copyrighted-source problems), but that died out because Noah dropped out. Afaik he realised how much Delphi compatibility was really missing in VP, and that this wouldn't be a one-time effort.

     

    The only one who afaik really did something was Veit, who worked quite hard, but he is more an RTL/library maintainer like I am, not a compiler devel. At least not yet.

     

    1.5 years later effectively nothing had happened, and Alan pulled the plug. He once remarked that I probably was right back then in proposing to put VP in bugfix-only mode in 2003 (roughly the state it is in now) and to direct future users and efforts towards FPC.

     

    FPC had less to gain than many people think. Essentially we hoped that Veit would cross over, at least partially, and that the understaffed "old" ports, Dos and OS/2, would be given a gust of life. Maybe some of the stability freaks could have been put to use maintaining older FPC versions with bugfixes only, to get very stable FPC releases too.


  7. Yes, UTF is unsolved at the moment, but I never really tried to do anything about it.

    No known problems with filenames though.

     

    (Slowly, the unices are adopting UTF-8 consoles)

     

    But the main problem here is limited time:

     

    Yes, I know the feeling.

     

    I don't want to spend or waste time on a not yet necessary compiler change.

    As I described above, it would take months just to get NDN back to where it is right now.

     

    True. But if you don't start modest preparations, sooner or later you'll hit a brick wall. And workarounds cost a lot of time also.

     

    As my todo list is quite long, I'd rather spend my time on bugfixing and new features, until the day comes when VP no longer works for NDN...

     

    Ok. Well, as long as you realize that, it's fine.

     

    Still it is unfortunate. As said before, the decline of the older targets (Dos, OS/2 and, to a lesser degree, Amiga) is mostly due to similar reasoning by all the users involved (and a bit to the fact that all FPC devels moved on, some already adopting Delphi at work when it first came out).

     

    The TV code was extended a lot. And my main concern is the OS-dependent RTL code: the RTL of each of the 3 VP targets was highly improved and rewritten. I would have to spend a long time doing the same to another RTL until it behaves like it should.

     

    Partially yes. While the FPC RTL is much more battle-tested on non-Dos platforms (win32/64, *nix, OS X), it is much less of a Dos emulation than VP's is, due to the larger number of platforms and a much, much higher percentage of users whose current development efforts don't date back to Dos TP times.

     

    Moreover, mutating the RTL too heavily would just move you from one island (VP) to the next (a specific FPC version, frozen in time).

     

    Still it is a pity that nobody will even try, because we ourselves don't know the exact situation of the older FPC ports: how much of it is simply some minor attrition, and how much is fundamental. For that you really need true users, who make detailed reports with narrowed-down tests etc.

     

    And to be honest, if no user wants to invest in Dos anymore, I have my doubts about investing in it as an FPC developer too. Maybe it is time to let it die.


  8. Some opinions about various topics in this thread:

     

    About FPC usage:

    the Dos port was indeed in decline, even though Dos was at one time the core platform of FPC. However, with 2.2.2 the platform will hopefully be back closer to a releasable state.

     

    Even if you stick with VP, getting some FPC testing done might be useful. It improves FPC, and it allows the problems to be investigated. Especially since its "Vision" part is copyright-free. FPC on Linux/Windows should generally be way better than VP, except maybe for debugging on Windows.

     

    One of the reasons that the FPC Dos port is in decline is exactly this: when the hordes moved on, Dos usage ground to a halt, and the few remaining users don't want to invest any time anymore, and keep patching old stuff (TP, VP) indefinitely. Which is a pity. At least try to experiment and file bugs a bit. As said, the new 2.2.2 release should at least get the Dos port back closer to the quality it had during the later 1.0 times.

     

    If there are issues with the *nix FPC RTL, I'd gladly hear about them, since I'm the maintainer. Note that FPC made enormous progress after the original postings in this thread, first and foremost the entirely redesigned Unix ports. However, some of the issues that you (probably) have now (UTF-8 and long file names in TV) are there in FPC too. It is unclear what to do about that: upgrade TV to be more Delphi-dialect compatible, or keep it the way it is. The point is a bit academic, since nobody is using Turbo Vision under FPC, and it is only minimally fixed from time to time to keep the textmode IDE running.

     

    Same with unicode and threads: they are supported on non-Dos targets, but not (yet) on Dos. However, these kinds of functionality are implemented using manager records (records with procedure variables) and are thus pluggable by the user without even a recompile, at least in principle.
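
    Schematically the pattern looks like this (a simplified illustration, not the real FPC record layouts): the RTL dispatches through procedure variables collected in a record, so a program can install its own implementations at startup without an RTL recompile.

        type
          TMyThreadManager = record
            DoBeginThread: function(Func: Pointer; Arg: Pointer): PtrUInt;
            DoEndThread: procedure(ExitCode: LongInt);
          end;

        var
          CurrentTM: TMyThreadManager;  { the RTL calls through this }

        procedure SetMyThreadManager(const NewTM: TMyThreadManager);
        begin
          CurrentTM := NewTM;  { a Dos program could plug in a DPMI-based
                                 implementation here, in principle }
        end;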

     

    About "libc and Linux".

     

    The libc interface is not a hard API with a defined binary interface. This is because it is governed by Unix standards like POSIX, but that is a different way of specifying an API, and being able to use the system compiler is one of the requirements of that approach. IOW, if you are not the system compiler, you might not be able to interface with a POSIX interface the way it is meant to be done (read: using OS-provided headers).
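
    A sketch of why that matters for a Pascal compiler (the record below is purely illustrative, not the real struct stat): a C program picks up the struct layout from the system headers at compile time, while a Pascal binding has to hardcode a snapshot of it.

        type
          TFakeStat = record      { hypothetical snapshot of a C struct }
            st_dev: LongWord;
            st_ino: LongWord;     { a new OS release may widen this... }
            st_mode: LongWord;    { ...and every offset below then shifts }
            st_size: Int64;
          end;

    The C code recompiles cleanly against the new headers; the Pascal record silently misreads memory until someone re-translates the headers.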

     

    So while the man page declarations stay the same, types and macro implementations often change. Especially the threading-related part generally breaks every major release, and sometimes in between.

     

    The distro maintainers that keep 3rd party C/C++ packages running are pretty useless for anything outside the basic "configure; make; make install" run. They are already confused if you use a different C compiler (I can remember enough problems with packages using TenDRA, lcc or pgg).

     

    So IMHO the arguments about libc linking in this thread are slightly naive. Show me a nineties shared Linux binary that is still working without compatibility settings or libs, and I'll show you my 1998 FPC binaries. Even static Linux binaries using libc probably won't work anymore.

     

    Sockets are btw perfectly doable using kernel calls. However, DNS resolving and some forms of unicode use are better left to the OS (this because resolving is pluggable in libc, for directory systems).
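
    For the sockets part, a minimal sketch with FPC's Sockets unit (error handling omitted, and the address is just a placeholder example); on Linux this goes through kernel calls rather than libc:

        program tinyconn;

        uses Sockets;

        var
          Sock: LongInt;
          Addr: TInetSockAddr;
        begin
          Sock := fpSocket(AF_INET, SOCK_STREAM, 0);
          Addr.sin_family := AF_INET;
          Addr.sin_port := htons(80);
          { numeric address only: no DNS, resolving is left to the OS }
          Addr.sin_addr := StrToNetAddr('192.0.2.1');
          if fpConnect(Sock, @Addr, SizeOf(Addr)) = 0 then
            WriteLn('connected');
          CloseSocket(Sock);
        end.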
