I wonder and muse on whether, in reality, it was wise to give browsers the ability to render markup using the 'Tag Soup' rendering engine, or at least quite such a good ability to do so.
Of course we know that without it more than half the web would vanish overnight, but I am constantly staggered by quite how effective it is at rendering code that is so completely wrong: code that you just stare at and honestly question how, or why, it has any right to be parsed at all.
I further muse on whether it might somehow have been better to have had a mechanism whereby, if one wanted to make use of the power, flexibility and benefits derived from Standards and CSS, that function was provided only through the strict use of Strict DTDs, and in using these the Tag Soup engine would be asked to be far less lenient with code parsing and rendering, thus clearly marking a line between good code and bad. I know that true XHTML provides for this, and also that the XML parser has been criticized for being far too strict and not fault tolerant, but I can only see that as a good thing: it says you must write proper code or else we will return a list of your errors, which is not that great a hardship, nor an impossible goal to reach.
I can't help but feel that when a page can be so hopelessly malformed in its markup and yet still render perfectly well, it provides yet another impediment to Standards ever really forcing people into good practice.
I know this isn't a serious argument; fault-tolerant rendering is a necessity, and one that is never going to be removed as such. Just dreaming a little.
No, of silk purses and sows' ears!
I think there are two different points mixed up in your note.
(1) Standards are good and should be encouraged, as should the use of correctly formed code. Web professionals should endeavour to ensure their code is properly formed and complies with the other relevant best practices of the time.
The beauty of the web is that it's easy to make a page. It's not necessarily easy to make an easily maintainable, XHTML Strict, semantic and accessible page, but that is what should separate the professionals from the amateurs. I think the unfortunate thing is that there are still professionals who aren't bothered with these aspects of their work. But I don't think it's a problem that anybody with a PC and an internet link can build a working page.
(2) What user-agents do when they encounter improperly formed code.
There shouldn't be anything against them attempting to make proverbial silk purses out of sows' ears. I don't see why it is necessary for the XML standard to require compliant user-agents to halt on detecting malformed code; to me that is just silly. Requirements like that can only work to put people off and make them give up in frustration. How much better the tag soup way: you always get something, and as your knowledge and expertise improve you may even get what you want.
So, in response to your wandering muse -
- I think it is extremely important that browsers are able to render any old tag soup. And the web certainly wouldn't have got where it is today without it!
Ah, there were probably many points hiding away in that addled musing.
Sadly, in response to the first, one could wonder what encouragement there really is to adopt standards; and yes, I know most of the arguments, but I sometimes wonder whether those arguments only hold sway in a rather small, select community.
Although I stated otherwise, I do tend to agree with the camp that argued against the strictness of the XML parser, and I agree entirely that the web wouldn't have got anywhere without the Tag Soup parser; it's just that there are occasions when, looking at code, I think, "Blimey, how the heck is that remotely possible to parse and render?"
But who's benefiting from the support of tag soup?
While the rendering of tag soup might seem great for generating more content, you also need to consider that tag soup is slowing us down in terms of progression. We're doing ourselves as developers a disservice when we say it's OK to create tag soup. The rules aren't that difficult to adhere to. By reducing the support for tag soup we're giving the content the focus it deserves and increasing our ability to do things with it. There are programmers out there who can have data whizzing around all over the place, and they're just waiting for us middlemen to put our content into a format they can play with.
As you know, not all data is accessed by a browser, and in the future we're going to have software and hardware that can do things with the data that we haven't yet thought possible. Making that software work out what to do with important data shouldn't be the software programmer's problem.
Or something like that...
Good points, and something along the lines of what I was musing on, but I guess this can only come from teaching best practice rather than forcing it through error handling? Maybe? Sometimes I just feel that Standards ought to stop being so woosey (sic) and start laying down the law.
It's not that tag soup is a great way of generating more content; it's that having less restrictive requirements means the creation of web content is easier, and therefore accessible to more people.
I don't think reducing support for tag soup helps that at all. It will create a more elite web. Sure, if you want to do something vital with your data, demand a more stringent level of validity, but that should be the case anyway. I don't want my bank or my hospital to be guessing what to do because they received some improperly formatted XML. But I really don't mind whether the RSS feed for my granny's blog or a friend's baby photos is 100% strictly compliant.
All of which goes back to my original paragraph. High standards should be demanded of professionals. For everyone else it's encouragement, not 'if you can't do it, you're not allowed'.
Why should the RSS feed from your granny's blog or a friend's photos not be 100% valid in their transport and display rules? Your granny will be using blogging software that was created by someone like you or me, so she shouldn't need to worry about the markup. Your friend's images are just content, so whatever software he uses to deliver them to you should be able to pass on the content, with his comments etc., clear and precise.
The longer it takes for the people who botch up your granny's blogging software to bring it up to scratch, the longer it'll be until you're able to pull it up on your ZX07Gigathingamabob.
If your friend had emailed those pics to you, they'd never get there if the mail client didn't say "HELO, MAIL FROM, RCPT TO, DATA" and so on. In fact, I can't think of another medium that would allow them to get to you without all the right information, except the postal service, and they have human intervention to make sure that just enough water gets into the envelope to make them all stick together.
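Roughly (the host names and addresses here are invented, and I'm glossing over the server's side of the conversation), the exchange has to go something like this before a single photo moves:

    HELO home.example.com
    MAIL FROM:<your.friend@example.com>
    RCPT TO:<you@example.com>
    DATA
    ...the message and the pictures go here...
    .
    QUIT

Skip a step, or spell a command wrongly, and the server simply refuses. There is no tag-soup mode for mail.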
Ultimately, there is one thing that drives all this strict adherence to standards when it comes to XML, and that's to allow software to talk to software without having to second-guess what the other software developer was intending to send.
Cheers
Steve
Chris..S wrote: All of which goes back to my original paragraph. High standards should be demanded of professionals. For everyone else it's encouragement, not 'if you can't do it, you're not allowed'.
Agreed; to say or think otherwise flies in the face of the whole spirit and ethos that has driven the web. But perhaps it's not sufficient simply to demand high standards of 'professionals'; perhaps the standard needs to be much more clearly defined and demarcated. We might understand and recognise standards when we see them, but does everyone else? Is my bank, heavily into full online banking, working to the highest standards?
Some musings of my own.
Fault tolerance in browsers is an error made worse by the browser wars.
There are only about 80 elements of html, and probably only about 15 or 20 that are in common usage. There is no sane reason anyone wanting to mark up a page can't learn them.
CSS is a powerful presentation language requiring well-structured and syntactically correct html to work properly. CSS deals with more properties and selectors than html has elements. Why shouldn't anyone wanting to use css be able to properly mark up his content?
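A made-up scrap of markup illustrates the point. The browser will render misnested soup happily enough, but the tree it quietly builds is not the one you wrote, so your css stops matching:

    <style>p b { color: red; }</style>

    <p>Properly nested: <b>this text is red</b>.</p>

    <!-- A div may not live inside a p, so the parser closes the p
         before the div; "p b" no longer matches and this stays black. -->
    <p>Tag soup: <div><b>this text is not red</b></div></p>

The page "works" either way; the author just doesn't get what he asked for.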
Javascript fails on error. Anyone smart enough to cut and paste javascript, much less write their own, ought to be able to write html markup.
Zero tolerance is a Good Thing
I know this blog is months old, but it is still on the home page of the site, so I'll add my own musings.
HTML has a very gentle learning curve and that is its incredible strength. My first webpage (if you could, in fact, call it that) was 'Hello World' typed into a text editor and saved with a .html extension. The moment I clicked the big blue 'e' that appeared on my desktop (yes, it was IE, but I was young and ignorant) and my words appeared inside a web browser, I was hooked. The gratification was immediate and it promised unlimited power: the power to compete on an equal footing with the big guns. My content was, potentially, as important and as accessible as theirs.
That very simple beginning, and immediate success, spurred me on to learn more. I read, researched and devoured every resource I could lay my hands on. And many of those resources were (and still are) produced by people just like me: people who were not professionals, but enthusiastic amateurs who were trying things out and willing to share with others.
My desire to learn more led me, eventually, to this site and my knowledge has increased exponentially. I'd never heard of a doctype until two years ago, and my level of knowledge did not allow me to understand that they were important. I didn't use one until, perhaps, one year ago and then it was a simple cut and paste operation because people told me I needed to have one. It wasn't until a month ago, when I stumbled upon CSSCreator, that I fully understood just how important they are. However, if you were to ask me how they work, I'd mumble something about parsing and quickly change the subject.
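(For what it's worth, and at the risk of over-simplifying something I only half understand: the doctype is just the first line of the page, something like

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">

and browsers use its presence to decide how to treat everything that follows. With it, the page is rendered in standards mode; without it, most browsers fall back into their old 'quirks' behaviour, which is why that one cut-and-paste line makes more difference than it looks.)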
I cringe when I look back at some of the things I did at the beginning of my journey. The colours, the layout and the content look horrible to me now. However, I love them, because I love all my children (even the ugly, clumsy ones). They are concrete markers of my own development (or lack thereof). I'm sure I will cringe in the future at the things I am doing now.
I have a smattering of knowledge of PHP and Javascript, just enough to get me by. I have done countless tutorials and read numerous books, but they are unforgiving taskmasters. The simplest errors, a misplaced semi-colon or inverted comma, result in a blank screen or an error message. I persevere, but it is a chore, and I don't get the same kind of immediate satisfaction I get from my forays into HTML and CSS.
I would hate to see non-compliant code rejected. It would deny all those kids busily hacking away at their MySpace profiles an enormous amount of pleasure. And, it would deny us, the users of the internet, their talents and enthusiasm.
Just my AUD 2c worth.
Very valid points, and historically there are definite reasons why the SGML parser is forgiving. I remember when I first (just for fun) served an XHTML page as application/xhtml+xml so that it was parsed correctly, how odd it was that it should halt the rendering completely just because I had forgotten something silly like a closing slash on a meta tag; indeed, people far more knowledgeable on the subject than I am have commented that this strictness is unnecessary.
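Just to show how small the slip can be (a made-up snippet, but exactly the sort of thing I mean): serve the page as application/xhtml+xml and this line is a fatal well-formedness error, because in XML every element must be explicitly closed:

    <meta name="description" content="my home page">

while this one parses without complaint:

    <meta name="description" content="my home page" />

One missing slash and, instead of rendering the page, the browser hands you an XML parsing error.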
I was in part reacting against the constant flood of bad code, though, and I still remain surprised at times by just how effective the tag soup parser is at attempting to render code so horrendous that one has to wonder whether it should really allow such bad coding to pass, and whether it just fosters a culture that is never going to improve its coding because, in its eyes, it simply doesn't matter. This state of affairs also makes it that much harder to defend good coding practices.
When the student is ready the teacher will come.
Many come here and, hopefully, leave better for the experience. I know I have.
Hugo wrote: one has to wonder whether it should really allow such bad coding to pass
Maybe instead of either falling over at the first error or letting all errors through, the parser could keep a score of errors and then fall over past a certain point together with a friendly: "I'm sorry but your code is just too ugly for me to handle. Please try again."
Tyssen wrote: Maybe instead of either falling over at the first error or letting all errors through, the parser could keep a score of errors and then fall over past a certain point together with a friendly: "I'm sorry but your code is just too ugly for me to handle. Please try again."
" Sorry you have exceeded your allotted quota of validation fail checkpoints, your page must now be shredded, please try again , thank you and have a nice day!"
My two pennies worth
Mind if I express some of my thoughts?
My current website (burlster.com), up and running, is the definition of an amateur personal home page. It has been running for 5 years now and has grown and evolved each year as my skills have developed, from Version one to the current Version 4: Gold Edition.
I'm not here to boast about my site; quite the contrary, it's awful. But it has been an important part of my life for a while, and only recently have I started to learn about real web design. Where would I have learnt it before? I did Computing at A-level 7 years ago and learnt basic HTML, and since then I have taught myself web design skills here and there, but things like 'doctypes' and 'meta tags' just scared me off.
I recently got a job as a professional web developer and have consequently learnt about accessibility and standardised ways of doing things. That's all fine, but I wouldn't have wanted to be a web developer, nor would I have had the basic skills to develop in the first place, if I hadn't had a playground in which to exercise. So my argument is that the web has to allow rubbish code. You wouldn't buy a house without looking round it first, right?
For those of you wondering how I got the job, it's because I showed a passion to learn and become the best web developer I can be.
Anyway, I'm inclined to feel that well-written sites are a bonus and are appreciated, but I think this idea of badly written sites being illegal is a bit silly. When newspapers are getting told off because their typeface is too small, or shops are being asked to take down posters because there is not enough contrast between the text and the background colour, then I'll accept it a little more!
Phew, that'd be me over and out for now,
John
The web only has to allow rubbish code because it was realised that it was impossible to impose a rigid standards system: too many sites would break. As to whether this is a good thing, sadly it is not, but little can be done about that. Standards are there to ensure that developments can move forward while always maintaining backwards compatibility, so that it's always possible to render older code; but as long as the tag soup parser operates in the way it does, it to some extent supports people's ability to code badly and seemingly get away with it with impunity. Not everyone wants to learn and better their techniques and skills.
The analogy of looking around a house is slightly spurious. Nothing should prevent you practising, but why should you not have to learn some basics first before your code will render? You're not allowed to take a car out on your own and 'improve' as you go (not that that's a particularly good analogy either).
It's not about badly written sites being illegal, but it is silly that so much leeway is afforded.
I suppose we each sit just a little bit on opposite sides of the line (required standards: good or bad?), but not really far enough apart for me to want to debate it. Problem is, I agree with you. If higher standards had been an actual requirement for the code to work, I would probably still be sat here as a web developer; I'd just have produced slightly better sites in the past.
I guess I just like the fact that anyone can type
HELLO
and there it is for them. It's the only programming language I can think of where you can program it poorly, get the results you wanted regardless, then put it up for the whole world to see... and yeah, that might be an awful thing, but that's also part of HTML's charm. It's like the kid in the classroom who just gets on with everyone because they're never confrontational, aggressive or intimidating.

burlster wrote: You wouldn't buy a house without looking round it first, right?
And you wouldn't buy a house from someone who had made it while they were just learning to develop their skills.
Touché
Hello world!
But, you see, that's perfectly valid except that the text requires a container, and the document (<html>) requires a title. How would having an example such as

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">
    <title>my HELLO doc</title>
    <p>Hello</p>

scare anyone off? That's perfectly valid html4. Compare to a simple javascript instruction:

    alert ("Hello");

Miss a single quotation mark or parenthesis and you have a fatal error.
I do believe we'd have been better off had UAs not been fault tolerant. That does not mean failure needs to be catastrophic. It would be sufficient that errors be ignored. Thus, for example, improper element nesting would be rendered as plain, unadorned, inline CDATA.
But we're stuck with it now, widespread broken pages and all.
cheers,
gary