One extremely important XSLT use-case is for RSS/Atom feeds. Right now, clicking on a link to a feed brings up a wall of XML (or worse, a download prompt). If the feed has an XSLT stylesheet, it can be presented in a way that a newcomer can understand and use.
I realize that not that many feeds are actually doing this, but that's because feed authors are tech-savvy and know what to do with an RSS/Atom link.
But someone who hasn't seen/used an RSS reader will see a wall of plain-text gibberish (or a prompt to download the wall of gibberish).
XSLT is currently the only way to make feeds into something that can still be viewed.
I think RSS/Atom are key technologies for the open web, and discovery is extremely important. Cancelling XSLT is going in the wrong direction (IMHO).
I've done a bunch of things to try to get people to use XSLT in their feeds: https://www.rss.style/
> “They don't put ads on their sites, so I'm not surprised…”
Similarly, Chrome regularly breaks or outright drops support for web features used only in private enterprise networks. Think NTLM or Kerberos authentication, private CA revocation list checking, that kind of thing.
Replaced them with app stores. Why have one code base when you can have N code bases: web sites, iOS, Android, TV…
Cheaper, privacy-oriented and more secure? Lol, obviously not. It doesn't help the consumer or the developer.
XSLT is brilliant at transforming raw data, a tree or a table for example, without having to install Office apps or pay a number of providers simply to view it, and without massive disruption.
Gotta love the reference to the <link> header element. There used to be an icon in the browser URL bar when a site had a feed, but they nuked that too.
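For anyone who hasn't seen it, that discovery mechanism is a `link` element in the page's `<head>`; a typical (illustrative) example:

```html
<link rel="alternate" type="application/rss+xml"
      title="Example site feed" href="/feed.xml">
```

Feed readers still use this to locate the feed when given a homepage URL; the old browser icon just surfaced the same information.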
> There used to be an icon in the browser URL bar when a site had a feed, but they nuked that too.
This is actually a feature of Orion[0], and among the reasons why I believe it to be one of the most (power) user-oriented browsers in active development.
It's such a basic thing that there's really no good reason to remove the feature outright (as mainstream browsers have), especially when the cited reason is to "reduce clutter" which has been added back tenfold with garbage like chatbots and shopping assistants.
Man, reaching way back in history here, but this reminds me of why I stopped contributing to Mozilla decades ago. My contribution was the link toolbar, which was supposed to give a UI representation of the canonical link elements like next and prev and whatnot. At the last minute before a major release, some jerkhole of a product manager at AOL cut my feature from the release. It's incredible the way such petty bureaucrats have shaped web browsers over the years.
Good user-facing software tends to have a coherent vision, and that involves getting features cut that people put a lot of time and effort into; even though those features have value, it's possible they don't have value in the product under development.
I don't really have enough context to say whether that was the case here. Mostly I'm raising the comment to note that this is an issue in commercial software too, but the sting is immediately moderated by "At least you got paid." It's a lot easier to see one's work fail to be reflected in the finished product when you can dry your tears with the bills in your money-pile (and I don't know how open source competes in things as cut-throat and taste-opinionated as UI when that continues to be true without solving the problem by injecting money into the process, which carries its own risks).
IIRC, all of the proposed workarounds involved updating the sites using XSLT, which may not always be particularly easy, or even something publishers will realize they need to do.
For RSS/Atom feeds presented as links in a site (for convenience to users), developers can always offer a simple preview for the feed output using: https://feedreader.xyz/
Isn't this kind of an argument for dropping it? Yeah it would be great if it was in use but even the people who are clicking and providing RSS feeds don't seem to care that much.
You are probably right, but it is depressing how techies don't see the big picture & don't want to provide an on-ramp to the RSS/Atom world for newcomers.
Google is widely faulted with effectively killing RSS by pulling the plug on Reader (I, for example, haven’t used RSS since), so I don’t think they’re missing the big picture; I think they just prefer a different picture.
It's probably worth considering that if the technology could be killed by one company pulling its chips off the board, perhaps the technology wasn't standing on its own.
We still use RSS and Atom feeds for podcasts. It's a pretty widely-adopted use case. Perhaps there is a lot more to the contraction of RSS as a way for discovering publishing of "blog"-style media than "Reader got killed" (it seems like Reader offered more features than just RSS consolidation that someone could, hypothetically, build... But nobody has yet?).
Native apps are always better, but having a web page syncing your feeds made it easier to access them, e.g. from a library or work computer. Not to mention nothing to install (or update) reduces friction. I didn’t have to stop using RSS, but the newly exposed hurdles were enough discouragement that I did stop.
1. Everyone who uses a static site generator can add XSLT
2. Everyone who doesn't use a static site generator only has to add the XSLT file and add a single line to the XML. No need to write any code: new code is not a big deal for many HN readers, but not every blog author is a coder.
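For context, the "single line" is the `xml-stylesheet` processing instruction at the top of the feed (the stylesheet path here is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<rss version="2.0">
  <!-- channel and items as before -->
</rss>
```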
I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
"XSLT is currently the only way to make feeds into something that can still be viewed."
You could use content negotiation just fine. I just hit my personal rss.xml file, and the browser sent this as the Accept header:
You can easily ship out an HTML rendering of an RSS file based on this. You can have your server render an XSLT if you must. You can have your server send out some XSLT implemented in JS that will come along at some point.
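A minimal sketch of that server-side dispatch, assuming a hypothetical `negotiate` helper and naive substring matching (a real implementation should parse q-values):

```python
def negotiate(accept_header: str) -> str:
    """Choose a representation for a feed URL from the Accept header.

    Naive sketch: checks media types by substring and ignores q-values,
    which a production implementation must honor.
    """
    if ("application/rss+xml" in accept_header
            or "application/atom+xml" in accept_header):
        return "xml"   # a feed reader asked for the feed itself
    if "text/html" in accept_header:
        return "html"  # a browser: serve a human-readable rendering
    return "xml"       # default: send the raw feed


# A typical browser Accept header gets the HTML rendering:
print(negotiate("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"))  # → html
```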
To a first approximation, nobody cares enough to use content negotiation any more than anyone cares about providing XML stylesheets. The tech isn't the problem, the not caring is... and the not caring isn't actually that big a problem either. It's been that way for a long time and we aren't actually all that bothered about it. It's just a "wouldn't it be nice" that comes up on those rare occasions like this when it's the topic of conversation and doesn't cross anyone's mind otherwise.
Another point: it is shocking how many feeds have errors in them. I analyzed the feeds of some of the top contributors on HN, and almost all had something wrong with them.
Even RSS wizards would benefit from looking at a human-readable version instead of raw XML.
> I analyzed the feeds of some of the top contributors on HN, and almost all had something wrong with them.
I’m sceptical about your analysis, because your tool makes spurious complaints about my feed <https://chrismorgan.info/feed.xml> which show that it’s not parsing XML correctly. For stupid reasons¹ that I decided not to fix or work around, many of the slashes are encoded as character references, which is perfectly valid, but your tool fails to decode the character references inside attribute values. I don’t know what dodgy parser you’re using, it’s possible this is the only thing it gets wrong about parsing XML², but it doesn’t instil confidence. I would expect a strict XML parser to be more reliable. I’ve literally only once encountered a feed that was invalid XML³. Liberal parsing is not a virtue, it’s fragile in a different way. Postel was wrong.
—⁂—
¹ I wish OWASP’s XSS protection cheat sheet had never been written. I will say no more.
² Honestly, parsing XML isn’t very hard; once you’re past the prologue, there are literally only about seven simple concepts to deal with (element, attribute, text, processing instructions, comments, cdata, character/entity references), with straightforward interactions. Not decoding references in attribute values is a mind-boggling oversight to me.
³ WordPress thinks it’s okay to encode U+0003 as a character reference in an XML 1.0 document, where that character is simply not allowed.
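On the attribute-reference point above: any conformant parser decodes character references in attribute values. For instance, Python's stdlib parser (the URL here is illustrative):

```python
import xml.etree.ElementTree as ET

# An attribute value whose slashes are written as the character
# reference &#47; — perfectly valid XML.
doc = '<link href="https:&#47;&#47;example.com&#47;feed.xml"/>'
root = ET.fromstring(doc)

# A conformant parser hands back the decoded value.
print(root.get("href"))  # → https://example.com/feed.xml
```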
> I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
Maybe it's more for people who have no idea what RSS is and click on the intriguing icon. If they weren't greeted with a load of what seems like nonsense for nerds there could have been broader adoption of RSS.
> I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
So that excludes you from the "someone who hasn't seen/used an RSS reader" demographic mentioned in the comment you are replying to.
I never realized styling RSS feeds was an option. Now, looking at some of the examples, I wonder how many times I've clicked on "Feed", then rolled my eyes and closed it because I thought it wasn't RSS. More than zero, I'm sure.
> Cancelling XSLT is going in the wrong direction (IMHO).
XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away. The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
It seems like something an extension ought to be capable of, and if not, fix the extension API so it can. In firefox I think it would be a full-blown plugin, which is a lower-level thing than an extension, but I don't know whether Chromium even has a concept of such a thing.
> XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away.
Not having it available from the browser really reduces the ability to use it in many cases, and lots of the nonbrowser XSLT ecosystem relies on the same insecure, unmaintained implementation. There is at least one major alternative (Saxon), and if browser support was switching backing implementation rather than just ending support, “XSLT isn’t going anywhere” would be a more natural conclusion, but that’s not, for whatever reason, the case.
I don’t see anything that looks remotely like a normative argument about what browsers should or should not do anywhere in my post that you are responding to, did you perhaps mean to respond to some other post?
My point was that the decision to remove XSLT support from browsers rather than replacing the insecure, unmaintained implementation with a secure, maintained implementation is an indicator opposed to the claim "XSLT isn’t going anywhere”. I am not arguing anything at all about what browser vendors should do.
The idea is that if they did so, the people using software running in the browser could continue to use XSLT with just the browser platform because the functionality would still be there with a different backend implementation, but instead that in-browser XSLT functionality is going somewhere, specifically, away.
Right but either way, the vulnerability exists today, and you're saying that whether or not the browser platform supports the functionality that harbors the vulnerabilities, the browser platform should be responsible for resolving those vulnerabilities. That's how I read it.
> and you're saying that whether or not the browser platform supports the functionality that harbors the vulnerabilities, the browser platform should be responsible for resolving those vulnerabilities.
No, I'm not (and I keep saying this explicitly) saying that browsers should or should not do anything, or be responsible for anything. I’m not making a normative argument, at all.
I am stating, descriptively, that browser vendors choosing to remove XSLT functionality rather than repairing it by using an alternative implementation is very directly contrary to the claim made upthread that “XSLT isn’t going anywhere”. It is being removed from the most popular application platform in existence, with developers being required to bring their own implementation for what was previously functionality supported by the platform. I am not saying that this is good or bad or that anyone should or should not do anything differently or making any argument about where responsibility for anything related to this lies.
They did, the issue is that the improved Web platform they invested so much to build and maintain has no use for XSLT, which is obsolete in the modern world of good JavaScript, JSON and modern Fetch APIs.
Google decided to drop XSLT, because the volunteer-maintained libxslt had no maintainers for some time. So, instead of helping the project, they just decided to remove a feature.
Were you born before or after heartbleed uncovered the sorry state of OpenSSL and the complete absence of funding it was maintained under?
So to answer your question: every single one of them, from Google with its billions, to Mozilla with Google's billions. None of them would spend even a cent on critical open source projects they relied on as long as they could get away with it.
Almost all of them? As I recall, there was a single volunteer developer maintaining the XML/XSLT libraries they were using.
Wasn't it similar with OpenSSL 13+ years ago? Few volunteer maintainers, and money only got thrown at the project after a couple of major vulnerabilities.
I'm sure there's more and that's why the famous xkcd comic is always of relevance.
As others have pointed out, there are other options for styling XML that work well enough in practice. You can also do content negotiation on the server, so that a browser requesting an html document will get the human-readable version, while any feed reader will be sent the XML version. (If you render the html page with XSLT, you can even take advantage of better XSLT implementations where you don't need to work around bugs and cross-platform jank.) Or you can rely on `link` tags, letting users submit your homepage to their feed reader, and having the feed reader figure out where everything is.
There might even be a MIME type for RSS feeds, such that if you open an RSS feed in your browser, it automatically figures out the correct application (i.e. your preferred RSS reader) to open that feed in. But I've not seen that actually implemented anywhere, which is a shame, because that seems like by far the best option for user experience.
XSLT as a feature is being removed from web browsers, which is pretty significant. Sure it can still be used in standalone tools and libraries, but having it in web browsers enabled a lot of functionality people have been relying on since the dawn of the web.
> hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away
So why not switch to a better maintained and more secure implementation? Firefox uses TransforMiix, which I haven't seen mentioned in any of Google's posts on the topic. I can't comment on whether it's an improvement, but it's certainly an option.
> The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
Really? How about a trillion dollar corporation steps up to sponsor the lone maintainer who has been doing a thankless job for decades? Or directly takes over maintenance?
They certainly have enough resources to maintain a core web library and fix all the security issues if they wanted to. The fact they're deciding to remove the feature instead is a sign that they simply don't.
And I don't buy the excuse that XSLT is a niche feature. Their HTML bastardization AMP probably has even fewer users, and they're happily maintaining that abomination.
> It seems like something an extension ought to be capable of
I seriously doubt an extension implemented with the restricted MV3 API could do everything XSLT was used for.
> and if not, fix the extension API so it can.
Who? Try proposing a new extension API to a platform controlled by mega-corporations, and see how that goes.
It's encouraging to see browsers actually deprecate APIs, when I think a lot of problems with the Web and Web security in particular is people start using new technologies too fast but don't stop using old ones fast enough.
That said, it's also pretty sad. I remember back in the 2000s writing purely XML websites with stylesheets for display, and XML+XSLT is more powerful, more rigorous, and arguably more performant now in the average case than JSON + React + vast amounts of random collated libraries which has become the Web "standard".
But I guess LLMs aren't great at generating XSLT, so it's unlikely to gain back that market in the near future. It was a good standard (though not without flaws), I hope the people who designed it are still proud of the influence it did have.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Yup, "been there, done that" - at the time I think we were creating reports in SQL Server 2000, hooked up behind IIS.
It feels this is being deprecated and removed because it's gone out of fashion, rather than because it's actually measurably worse than whatever's-in-fashion-today... (eg React/Node/<whatever>)
100%. I’ve been neck deep over the past few months in developing a bunch of Windows applications, and it’s convinced me that never deprecating or removing anything in the name of backwards compatibility is the wrong way. There’s a balance to be struck like anything, but leaving these things around means we continue to pay for them in perpetuity as new vulnerabilities are found or maintenance is required.
What about XML + CSS? CSS works the exact same on XML as it does on HTML. Actually, CSS works better on XML than HTML because namespace prefixes provide more specific selectors.
The reason CSS works on XML the same as HTML is because CSS is not styling tags. It is providing visual data properties to nodes in the DOM.
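As a sketch (file and element names are illustrative), the XML document points at a plain CSS file with `<?xml-stylesheet type="text/css" href="feed.css"?>`, and the CSS selects the XML element names directly:

```css
/* feed.css — styling raw RSS 2.0 elements, no HTML involved */
channel > title { display: block; font-size: 1.6em; font-weight: bold; }
item            { display: block; margin: 1em 0; }
item > title    { display: block; font-weight: bold; }
item > pubDate  { display: block; font-style: italic; }
```

Unlike XSLT, though, CSS can only style the tree that's already there; it can't reorder elements, generate links, or build new structure.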
Agreed on API deprecation, the surface is so broad at this point that it's nearly impossible to build a browser from scratch. I've been doing webdev since 2009 and I'm still finding new APIs that I've never heard of before.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Awesome! I made a blog using XML+XSLT back in high school. It was worth it just to see the flabbergasted look on my friends' faces when I told them to view the source code of the page, and it was just XML with no visible HTML or CSS[0].
Some people seem to think XSLT is used for the step from DOM → graphics. This is not the first time I have seen a comment implying that, but it is wrong. XSLT is for the step from 'normalized data' → DOM. And I like that this can be done in a declarative way.
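A minimal sketch of that declarative data → DOM step, assuming a feed-like document with `channel` and `item` elements:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>

  <!-- Turn the whole channel into an HTML list -->
  <xsl:template match="/rss/channel">
    <ul><xsl:apply-templates select="item"/></ul>
  </xsl:template>

  <!-- Each item becomes a linked list entry -->
  <xsl:template match="item">
    <li><a href="{link}"><xsl:value-of select="title"/></a></li>
  </xsl:template>
</xsl:stylesheet>
```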
The "severe security issue" in libxml2 they mention is actually a non-issue and the code in question isn't even used by Chrome. I'm all for switching to memory-safe languages but badmouthing OSS projects is poor style.
It is also kinda a self-burn. Chromium is an aging code base [1]. It is written in a memory-unsafe language (C++), calls hundreds of outdated & vulnerable libraries [2] and has hundreds of high-severity vulnerabilities [3].
Given Google's resources, I'm a little surprised they haven't created an LLM that would rewrite Chromium into Go/Rust and replace all the stale libraries.
Google is too cheap to fund or maintain the library they've built their browser with. After its hobbyist maintainers got burnt out following more than a decade of thankless work, they're simply ripping out the feature.
Their whole browser is made up of unsafe languages, and their attempt to sort of make C++ safer has yet to produce a usable proof-of-concept compiler. This is a fat middle finger in the face of all the people whose free work they grabbed to collect billions for their investors.
Nobody is badmouthing open source. It's the core truth, open source libraries can become unmaintained for a variety of reasons, including the code base becoming a burden to maintain by anyone new.
And you know what? That's completely fine. Open source doesn't mean something lives forever.
The issue in question is just one of the several long-unfixed vulnerabilities we know about, from a library that doesn't have that many hands or eyes on it to begin with.
Maintaining web standards without breaking backwards compatibility is literally what they signed up for when they decided to make a browser. If they didn't want to do that job, they shouldn't have made one.
Chromium is open source and free (both as in beer and speech). The license says they've made no future commitments and offered no warranties.
Google signed up to give something away for free to people who want to use it. From the very first version, it wasn't perfectly compatible with other web browsers (which mostly did IE quirks things). If you don't want to use it, because it doesn't maintain enough backwards compatibility... Then don't.
The license would be relevant if I'd claimed that removing XSLT was illegal or opened them up to lawsuits, but I didn't. The obligation they took on is social/ethical, not legal. By your logic, chrome could choose to stop supporting literally anything (including HTML) in their "browser" and not have done anything that we can object to.
IIRC, lack of IE compatibility is fundamentally different, because the IE-specific stuff they didn't implement was never part of the open web standards, but rather stuff Microsoft unilaterally chose to add.
> By your logic, chrome could choose to stop supporting literally anything (including HTML) in their "browser" and not have done anything that we can object to.
Literally this. Microsoft used to ship a free web browser. Then they stopped. That's not something anybody can object to.
> because the IE specific stuff they didn't implement was never part of the open web standards, but rather stuff Microsoft unilaterally chose to add.
Standards aren't holy books. It's actually more important to support real customer use cases than to follow standards.
But you know this. If standards are more important than real use cases, then the fact that XSLT has been removed from the HTML5 standard is enough justification to remove it from Chrome.
> Literally this. Microsoft used to ship a free web browser. Then they stopped. That's not something anybody can object to.
There is a fundamental difference between ceasing to make a browser and continuing to make one while not meeting your expectations of a browser maker.
> If standards are more important than real use cases, then the fact that XSLT has been removed from the HTML5 standard is enough justification to remove it from Chrome.
Browsers very much have not deprecated support for non-HTML5 markup (e.g. the HTML4-era <center> tag still works). This is because upholding devs' and users' expectation that standards-compliant websites that once worked will continue to work is important.
The license is the way it is not by choice. We should be clear about that and acknowledge KHTML, and both Safari and Chromium origins. Some parts remain LGPL to this day.
It sounds like the maintainers of libxml2 have stepped back, so there needs to be a supported replacement, because it is widely used. (Or, if you are worried about the reputation of "OSS", you can volunteer!)
Where's the best collection or entry point to what you've written about Chrome's use of Gnome's XML libraries, the maintenance burden, and the dearth of offers by browser makers to foot the bill?
To anyone who says to use JS instead of XSLT: I block JS because it is also used for ads, tracking and bloat in general. I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all).
I think being able to do client-side templating without JS is an important feature and I hope that since browser vendors are removing XSLT they will add some kind of client-side templating to replace it.
The percentage of visitors who block JS is extremely small. Many of those visits are actually bots and scrapers that don’t interpret JS. Of the real users who block JS, most of them will enable JS for any website they actually want to visit if it’s necessary.
What I’m trying to say is that making any product decision for the extremely small (but vocal) minority of users who block JS is not a good product choice. I’m sorry it doesn’t work for your use case, but having the entire browser ecosystem cater to JS-blocking legitimate users wouldn’t make any sense.
I block JS, too. And so does about 1-2% of all Web users. JavaScript should NOT be REQUIRED to view a website. It makes web browsing more insecure and less private, makes page load times slower, and wastes energy.
To put that in context, about 6 percent of US homes have no internet access at all. The “I turn off JS” crowd is at least 3x smaller than the crowd with no access at all.
The JS ship sailed years ago. You can turn it off but a bunch of things simply will not work and no amount of insisting that it would not be required will change that.
I’m not saying change is not possible. I’m saying the change you propose is misguided. I do not believe the entire world should abandon JS to accommodate your unusual preferences nor should everyone be obliged to build two versions of their site, one for the masses and one for those with JS turned off.
Yes, JS is overused. But JS also brings significant real value to the web. JS is what has allowed websites to replace desktop apps in many cases.
> Yes, JS is overused. But JS also brings significant real value to the web. JS is what has allowed websites to replace desktop apps in many cases.
Exactly. JS should be used to make apps. A blog is not an app. Your average blog should have 0 lines of JS. Every time I see a blog or a news article whose content doesn't load because I have JS disabled, I strongly reconsider whether it's worth my time to read or not.
Did I say abandon? No. I said it should not be required. JavaScript should be supplementary to a page, but not necessary to view it. This was its original intent.
> JS is what has allowed websites to replace desktop apps in many cases.
Horribly at that, with poorer accessibility features, worse latency, abused visual style that doesn't match the host operating system, unusable during times of net outages, etc, etc.
> JavaScript should be supplementary to a page, but not necessary to view it.
I’m curious. Do Google Maps, YouTube, etc even work with JS off?
> This was its original intent.
Original intent is borderline irrelevant. What matters is how it is actually used and what value it brings.
> Horribly at that
I disagree. You say you turn JS off for security but JS has made billions of people more secure by creating a sandbox for these random apps to run in. I can load up a random web app and have high confidence that it can’t muck with my computer. I can’t do the same with random desktop apps.
> You say you turn JS off for security but JS has made billions of people more secure by creating a sandbox for these random apps to run in.
is "every website now expects to run arbitrary code on the client's computer" really a more secure state of affairs? after high profile hardware vulnerabilities exploitable even from within sandboxed js?
from how many unique distributors did the average person run random untrusted apps that required sandboxing before and after this became the normal way to deliver a purely informational website and also basically everything started happening online?
People used to download way more questionable stuff and run it. Remember shareware? Remember Sourceforge? (Remember also how Sourceforge decided to basically inject malware that time?)
I used to help friends and family disinfect their PCs from all the malware they’d unintentionally installed.
> I’m curious. Do Google Maps, YouTube, etc even work with JS off?
I use KDE Marble (OpenStreetMap) and Invidious. They work fine.
> Original intent is borderline irrelevant. What matters is how it is actually used and what value it brings.
And that's why webshit is webshit.
> I can’t do the same with random desktop apps.
I can, and besides the point, why should anyone run random desktop apps? (Rhetorical question, they shouldn't.) I don't run code that I don't trust. And I don't trust code that I can't run for any purpose, read, study, edit, or share. I enforce this by running a totally-free (libre) operating system, booted with a totally-free BIOS, and installing and using totally-free software.
> I use KDE Marble (OpenStreetMap) and Invidious. They work fine.
So no. Some major websites don’t actually work for you.
> And that's why webshit is webshit.
I don’t understand this statement. Webshit is webshit because the platform grew beyond basic html docs? At some point this just feels like hating on change. The web grew beyond static html just like Unix grew beyond terminals.
> I don't run code that I don't trust. And I don't trust code that I can't run for any purpose, read, study, edit, or share. I enforce this by running a totally-free (libre) operating system, booted with a totally-free BIOS, and installing and using totally-free software.
If this is the archetype of the person who turns off JS then I would bet the real percentage is way less than 1%.
I don't see how this makes the "JS availability should be the baseline" assumption any more legitimate. We make it possible to function in a society for those 6% of people. Low percentage still works out to a whole lot of people who shouldn't be left out.
I disagree. The world is under no obligation to cater to a tiny minority who self-select into reduced-functionality experiences.
It’s fine for you to turn off JS. It’s also fine for developers to require JS. Software has had minimum system requirements forever. I can’t run Android apps on my Palm Pilot from 2002 either and no one is obligated to make them work for me.
Without saying whether I think that's a good or bad thing, as a practical matter, I 100% agree. Approximately no major websites spend any effort whatsoever supporting non-JS browsers today. They probably put that in the class of text only browsers, or people who override all CSS: "sure, visitors can do that, but if they've altered their browser's behavior then what happens afterward is on them."
And frankly, from an economic POV, I can't blame them. Imagine a company who write a React-based website. (And again, I'm not weighing in on the goodness or badness of that.) Depending on how they implemented it, supporting a non-JS version may literally require a second, parallel version of the site. And for what, to cater to 1-2% of users? "Hey boss, can we triple our budget to serve two versions of the site, kept in lockstep and feature identical so that visitors don't scream at us, to pick up an extra 1% or 2% of users, who by definition are very finicky?" Yeah, that's not happening.
I've launched dozens of websites over the years, all of them using SSR (or HTML templates as we called them back in the day). I've personally never written a JavaScript-native website. I'm not saying the above because I built a career on writing JS or something. And despite that, I completely understand why devs might refuse to support non-JS browsers. It's a lot of extra work, it means they can't use the "modern" (React launched in 2013) tools they're used to, and all without any compelling financial benefit.
The point of the poster you're responding to is that sites are built JS-first for 98-99% of users, and it takes extra work to make them compatible with "JavaScript should NOT be REQUIRED to view a website", and no one is going to bother doing that work for 1-2% of users.
Yeah... or...... maybe they should just build websites the proper way the first time around, returning plain HTML, perhaps with some JS extras. Any user-entered input needs to be validated again on the backend anyway, so client-side JS is often a waste.
This falls apart the moment you need to add rows to a table or show and hide things in response to values selected in a dropdown. Even the lightest JS app centered around a big form is going to become a huge pain in the ass for literally no benefit. In a company of 100 people, that <0.5% of people who disable JS could literally be one guy, or no one at all.
You can use CSS for interactive-esque things like that. Use JS for all I care, just don't make it mandatory. You /could/ refresh the page with new values. You /could/ paginate your flow. You won't, because you'd rather spend 50 hours getting your JS to work right, than 5 hours writing some PHP.
I _could_ also just write API endpoints and handle client-side interaction however I want. If your preferences are incompatible with mine, that's a tradeoff I'm choosing to make. I am doing the work, you see, and I can choose how I want to do it.
You ostensibly run some flavor of Linux. Do you also complain that macOS apps don't run on your machine? It seems to me like a similar argument: somebody has developed an application in some particular way, but your choices have resulted in that application not running on your machine. Your choices are not necessarily _wrong_, but they are of very little consequence to somebody who has developed an application with a particular environment/runtime in mind. Why should they have to make significant architectural changes to their application to support your non-standard choices?
Blocking first party Javascript is a form of lunacy that is so illogical I can only shake my head. Let's say the site runs XSLT in Javascript. Now what? There's nothing that can be done and yet you would ask for further accommodation.
Here is why this is abusive: You can always restrict the subset of the web platform you demand to a subset that is arbitrarily difficult or even impossible to support. No matter how much accommodation is granted, it will be all for naught, because some guy out there goes even further with blocking things and starts blocking CSS. Next thing you know there's a guy who blocks HTML and you're expected to render out your website as a SVG with clickable links.
Of note here is that the segment we're talking about is actually an intersection of two very small cohorts; the first, as you note, are people who don't own a television errr disable Javascript, and the second is sites that actually rely on XSLT, of which there are vanishingly few.
XSLT is being exploited right now for security vulnerabilities, and there is no solution on the horizon.
The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
As they say, security is not a product, it’s a process. The process we have for existing browser technologies is better. That process is better because more people use it.
But even if we were to try to consider the technologies in isolation, and imagine a timeline where things were different? I doubt whether XML+XSLT is the superior platform for security. If it had won, we’d just have a different nightmare of intermingled content and processing. Maybe more stuff being done client-side. I expect that browser and OS manufacturers would be warping content to insert their own ads.
>You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
> The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
Yes, they also have many more vulnerabilities, because browsers are JIT compiling JS to w+x memory pages. And JS continues to get more complex with time. This is just fundamentally not the case with XSLT.
We're comparing a few XSLT vulnerabilities to hundreds of JIT compiler exploits.
While JIT exploits represent a large share of vulnerabilities in JS engines, there are enough other classes of vulnerabilities that simply turning the JIT off is not sufficient. (The same goes for simply turning JS off; browser internals are complex enough even without JS.)
Turning off the JIT eliminates an entire class of vulnerabilities just by nature of how the JIT works.
Ironically, JIT JS is much more susceptible to buffer overflow exploits than even the C code that backs XSLT - because the C code doesn't use w+x memory pages!
Yeah, turning off the JS or Web eliminates an entire class of vulnerabilities just by nature of how the JS or Web works (running untrusted code or showing untrusted content in the local machine) as well. That's no surprise.
The problem with JS isn't running untrusted code. That's easy and solved, we've been doing that for decades.
The problem with the JIT is compiling instructions, writing them to memory pages, and then executing them. This means your memory MUST be w+x.
This is really, really bad. If you have any way to write to memory unsafely, you can write arbitrary code and then execute it. Not arbitrary JS code. Arbitrary instructions. In the browsers process.
Even C and C++ do not have this type of vulnerability. At best, you can overwrite the return pointer with a buffer overflow and execute some code somewhere. But it's not 1995 anymore. I can't just write shell code in the buffer and then naively jump back into the buffer.
> I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all)
Recent XSLT parser exploits were literally the reason this whole push to remove it was started, so this change will specifically be helping people in your shoes.
I feel like there's a bias here due to XSLT being neglected and hence not receiving the same powers as JS. If it did get more development in the browser, I'm pretty sure it would get the same APIs that we hate JS for, and since it's already Turing complete, chances are people would find ways to misuse it and bloat websites.
Makes me kind of sad. I started my career back in the days when XHTML and co were lauded as the next big thing. I worked with SOAP and WSDLs. I loved that one could express nearly everything in XML. And namespaces… Then came JSON, and apart from it being easier for humans to read, I wondered why we switched from this one great exchange format to this half-baked one. But maybe I’m just nostalgic. Every time I deal with JSON parsers for type serialization and the questions of how to express HashMaps and sets, how to provide type information, etc., I think back to XML and the way that everything was available on board. Looked ugly as hell though :)
json is sort of Gresham's law ("bad money drives out the good") but for tech: lazy and forgiving technologies drive out the better but stricter ones.
bad technology seems to make life easier at the beginning, but that's why we now have sloppy websites that are an unorganized mess of different libraries, several MB in size without reason, and an absolute usability and accessibility nightmare.
xhtml and xml were better, also the idea separating syntax from presentation, but they were too intelligent for our own good.
> lazy and forgiving technologies drive out the better but stricter ones.
JSON is not "lazy and forgiving" (seriously, go try adding a comment to it).
It was just laser-focused on what the actual problem was that needed to be solved by many devs in day-to-day practice.
Meanwhile XML wanted to be an entire ecosystem, its own XML Cinematic Universe, where you had to adopt it all to really use it.
It's not surprising to me that JSON won out, but it's not because it's worse, it's actually much better than XML for the job it ended up being used for (a generic format to transfer state between running programs supporting common data structures with no extraneous add-ons or requirements).
XML is better for a few other things, but those things are far less commonly needed.
Don’t know if I would describe it as much better. I see it as similar to the whole SQL -> NoSQL -> let’s add all the features and end up with SQL again. JSON underwent a similar story, with the difference that we didn’t go back to XML. What I mean is: simplify, then realize what was actually missing.
But I agree for the smaller services and state transfer especially in web XML was just too damn big and verbose. But conceptually it was great.
For "I need this Python dict to exist in this unrelated JavaScript program's context space" JSON is absolutely much better. If only because you completely sidestep all the various XML foibles, including its very real security foibles.
JSON is so good at this that, like CSV, it does displace better tech for that use case, but the better tech isn't usually XML but rather things like Avro or Protobuf.
For the most part people don't add on XML features to JSON. Comments are a frequent addition, sometimes schemas, but for the most part the attraction of JSON is avoiding features of XML, like external entity validation, namespaces, getting to choose between SAX or DOM styles, or being required to support unrelated XML systems just to use another.
Again, there are problem domains where those are helpful, and XML is a good fit for those. But those problem spaces end up being much smaller in scale than the ones solved by JSON, Avro, Iceberg, etc.
But the whole point to JSON is to be nearly as dumb simple as possible so that the complexity of the problem domain will necessarily be handled in a real programming language, not by the data magically trying to transform itself.
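Both points above are easy to see concretely. Here is a minimal, illustrative JavaScript sketch (the sample payload is made up): JSON rejects comments outright, and moving structured state between programs is one call in each direction.

```javascript
// JSON is strict: a comment is a parse error, not a tolerated extension.
let commentRejected = false;
try {
  JSON.parse('{"a": 1} // comments are not JSON');
} catch (e) {
  commentRejected = true;
}

// The state-transfer use case: a dict serialized by some other program
// (Python, Go, whatever) becomes native data structures on this side.
const payload = '{"users": [{"name": "Ada", "tags": ["admin"]}], "count": 1}';
const state = JSON.parse(payload);

console.log(commentRejected, state.users[0].name, state.count); // → true Ada 1
```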
I like XSLT, and I’ve been using the browser-based APIs in my projects, but I must say that the XSLT ecosystem has been in a sad state:
- Browsers have only supported XSLT 1.0, for decades, which is the stone age of templating. XSLT 3.0 is much nicer, but there’s no browser support for it.
- There are only two cross-platform libraries built for it: libxslt and Saxon. Saxon seriously lacks ergonomics to say the least.
One option for Google as a trillion dollar company would be to drive an initiative for “better XSLT” and write a Rust-based replacement for libxslt with maybe XSLT 3.0 support, but killing it is more on-brand I guess.
I also dislike the message “just use this [huge framework everyone uses]”. Browser-based template rendering without loading a framework into the page has been an invaluable boon. It will be missed.
If you are using XSLT to make your RSS or Atom feeds readable in a browser should somebody click the link, you may find this post by Jake Archibald useful: https://jakearchibald.com/2025/making-xml-human-readable-wit... - it provides a JavaScript-based alternative that I believe should work even after Chrome removes this feature.
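For the curious, the general shape of such a JavaScript-based fallback can be sketched roughly like this (my own hypothetical sketch, not Jake Archibald's actual code): find the feed's `<?xml-stylesheet?>` processing instruction, fetch the stylesheet, and hand both documents to the standard `XSLTProcessor` API. Only the href extraction below is runnable outside a browser.

```javascript
// Pull the stylesheet href out of an XML document's
// <?xml-stylesheet type="text/xsl" href="..."?> processing instruction.
function stylesheetHref(xmlText) {
  const m = xmlText.match(/<\?xml-stylesheet[^?]*href="([^"]+)"/);
  return m ? m[1] : null;
}

// Browser-only continuation (XSLTProcessor does not exist in Node):
//   const xslDoc = new DOMParser().parseFromString(
//     await (await fetch(stylesheetHref(xmlText))).text(), "text/xml");
//   const proc = new XSLTProcessor();
//   proc.importStylesheet(xslDoc);
//   document.replaceChildren(proc.transformToFragment(xmlDoc, document));

const feed = '<?xml-stylesheet type="text/xsl" href="/feed.xsl"?><rss version="2.0"/>';
console.log(stylesheetHref(feed)); // → /feed.xsl
```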
I think that's sad. XSLT is, in my view, a very misunderstood technology. It gets hated on a lot. I wonder if this hate is from people who actually used and understood it, though. In any case, more often than not it is from people who in the same sentence endorse JavaScript (which, by any objective measure, is a far more poorly designed language).
IMO XSLT was just too difficult for most webdevs. And IMO this created a political problem where the 'frontend' folks needed to be smarter than the 'backend' generating the XML in the first place.
XSLT might make sense as part of a processing pipeline. But putting it in front of your website was just an unnecessary and inflexible layer, so that's why everyone stopped doing it. (except RSS feeds, etc.)
I'm not much of a programmer, but XSLT being declarative means that I can knock out a decent-looking template without having to do a whole lot of programming work.
Au contraire: the more you understand and use XSLT, the more you hate it. People who don't understand it and haven't used it don't have enough information and perspective to truly hate it properly. I and many other people don't hate XSLT out of misunderstanding at all: just the opposite.
XSLT is like programming with both hands tied behind your back, or pedaling a bicycle with only one leg. For any non-trivial task, you quickly hit a wall of complexity or impossibility, then the only way XSLT is useful is if you use Microsoft's non-standard XSLT extensions that let you call out to JavaScript, then you realize it's so easy and more powerful to simply do what you want directly in JavaScript there's absolutely no need for XSLT.
I understand XSLT just fine, but it is not the only templating language I understand, so I have something to compare it with. I hate XSLT and vastly prefer JavaScript because I've known and used both of them and other worse and better alternatives (like Zope Page Templates / TAL / METAL / TALES, TurboGears Kid and Genshi, OpenLaszlo, etc).
>My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
>Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
You should really try some of the modern alternatives. Don't let Angular and React's templating systems poison you, give Svelte a try!
Even just plain JavaScript is much better and more powerful and easier to use than XSLT. There are many JavaScript libraries to help you with templates. Is there even any such thing as an XSLT library?
Is there some reason you would prefer to use XSLT than JavaScript? You can much more easily get a job yourself or hire developers who know JavaScript. Can you say the same thing for XSLT, and would anyone in their right mind hire somebody who knows XSLT but refuses to use JavaScript?
XSLT is so clumsy and hard to modularize, only good for messy spaghetti monoliths, no good for components and libraries and modules and frameworks, or any form of abstraction.
And then there's debugging. Does a good XSLT debugger even exist? Can it hold a candle to all the off-the-shelf built-in JavaScript debuggers that every browser includes? How do you even debug and trace through your XSLT?
I think the fundamental disconnect here is that you're assuming that I am a developer. I'm not, I'm a lousy developer. It's not for lack of trying, programming just doesn't click for me in the way that makes learning it an enjoyable process.
XSLT is a good middle ground that gave me just enough rope to do some fun transformations and put up some pages on the internet without having to set up a dev environment or learn a 'real' programming language
As for the lack of libraries: technically they are possible, but they are not used that often. Maybe they are not that necessary. XML's idea is a federation of notations. Each notation is semantic; that is, it makes only specific distinctions. XSLT transforms between two such notations. Since each combination is unique, there is little to reuse. Where other tools use libraries, XSLT uses separate stylesheets and just rearranges them differently.
In Saxon you write templates that give you partial results and just call them with the -it:name switch. So if you have a four-step transform you can examine the results of each step in isolation. In libxslt that only supports 1.0 you can do this with parameters, although this is less convenient.
You can trace with xsl:message. Step-by-step debug is not there, granted; but it is definitely the last resort tool even when it is available.
Well said. I wrote an XSLT based application back in the early 2000s, and I always imagined the creators of XSLT as a bunch of slavering demented sadists. I hate XSLT with a passion and would take brainfuck over it any day.
Hearing the words Xalan, Xerces, FOP makes me break out in a cold sweat, 20 years later.
That's upsetting. Being able to do templating without using JavaScript was a really cool party trick.
I've used it in an unfinished website where all data was stored in a single XML file and all markup was stored in a single XSLT file. A CGI one-liner then made path info available to XSLT, and routing (multiple pages) was achieved by doing string tests inside of the XSLT template.
In my opinion this is not “we agree, let’s remove it”. This is “we agree to explore the idea”.
Google and Freed are using this as a go-ahead because the Mozilla guy pasted a polyfill. However, it is very clearly NOT an endorsement to remove it, even though bad actors are stating so.
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support. If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
Freed et al. also explicitly chose to ignore user feedback, going with their own decision rather than even trying to improve XSLT's security at the cost of performance.
Yeah, all these billion-dollar corporations see it as the only path forward not because of technological or practical issues, but because none of them can be arsed to give a shit and plan it into their budgets.
They’re MBAs who only know how to destroy and consolidate as trained.
I’m a modern developer and I see it as valuable. Why side with the browser teams and ignore user feedback?
If “modern developers” actually spent time with it, they’d find it valuable. Modern developers are idiots if their constant cry is “just write it in JS”.
No idea what’s inaccurate about this. That a billion-dollar company that has no problem pivoting otherwise can’t fund open technology “because budgets” is simply a lie.
I'm not personally in the business of maintaining a browser.
But if I were, and I were looking to decrease cost of maintenance, "This entire rendering framework that supports a 0.02% use case" would be an outlier for chopping-block consideration. Not all corner-case features match that combination of cost-to-maintain and adoption (after, what, decades at this point?).
We wouldn't be arguing the point if the feature in question were fax machine support, right?
Yes we would because people still use fax machines.
I don’t understand this “everything must be a business metric because it can be, therefore if I can whittle any feature down to a small minority, I am forever correct and just in destroying said technology. Look at how smart and savvy I am.”
Browser developers don’t have to do shit but it’s against the idea of an open web to kill off technology, especially one that’s A PART OF THE HTML STANDARD.
You love a closed web. That’s why you’re backing Google and arguing for this. I can’t change that there are so many weak minded “yes daddy Google” people in the world.
All I can do is advocate we support many technologies and ideas. The world you are advocating sounds locked down and uninteresting.
The proposal is to remove support and change the standard. Standards evolve. Sometimes features are removed because they are costly to maintain or security problems or both.
The fact is that all the major browsers are looking to deprecate this functionality because they all agree it’s a security bug farm and too underutilized to justify fixing.
> You love a closed web. That’s why you’re backing Google and arguing for this. I can’t change that there are so many weak minded “yes daddy Google” people in the world.
Don’t do this. We can just disagree without resorting to strawman and ad hominem attacks. No one insulted you for holding your opinion.
Before this my mental model of you was an engineer who’s frustrated that he’s going to have to do work to deal with the deprecation. I can empathize with that even if I think you are wrong in believing that browsers should invest further in support of xslt. Now I realize you just lack empathy for other engineers who are also forced to make real world trade offs. The fact that you happen to use xslt in the browser does not make it important relative to all the other features browsers support.
> Sometimes features are removed because they are costly to maintain or security problems or both.
But the feature and the tech don’t have security problems. The library implementation does. This is exactly the kind of bad-faith argument I’m talking about. Please, just for one second, try making an argument pro-XSLT and then try to compare the two mindsets about technology here.
There is no negative trade off by maintaining XSLT other than not being lazy developers. I have no empathy for people who hide behind billion dollar corporations and do their bidding. This is not some sort of critical situation, this is a Google engineer doing things because it’s easier, not “right” or “the difficult choice”.
> There is no negative trade off by maintaining XSLT other than not being lazy developers.
Only because it’s not your money or time being traded. Yes, if we pretend that engineering effort is free then there’s no reason Google couldn’t just rewrite this entire library in Rust or whatever. But if that were true you would just rewrite the library yourself and send the pull request to Chromium.
In the real world where engineering costs time and money, every decision is a trade off. Someone rewriting libxslt to be secure is someone who’s not implementing other features and who’s not fixing other bugs on the backlog.
Resources allocated to Chromium are finite and while sure, Google could hire 2 more engineers to do this, in reality those 2 new engineers could and would be assigned to higher priority work.
> this is a Google engineer doing things because it’s easier, not “right” or “the difficult choice”.
You keep blaming Google specifically. All of the major browsers are planning to drop this though. They all agree this is the right trade off.
Surprise, there’s already an effort to write XML-related libraries in Rust: the xrust library, for one.
It doesn’t have to be this pearl-clutching bitching and moaning about budgets and practicality. That’s just what Google wants you to believe.
No, the major browsers weren’t all planning to drop this. It literally only started happening because of Google. And Google is essentially forcing everyone’s hand. Again, this is a bad-faith argument. I will not concede my position on this unnecessary destruction of XSLT in the browser while other technologies get a pass.
I don't see how it can be reasonably asserted that this is "destruction of XSLT in the browser" when there are multiple XSLT conversion engines available in JavaScript.
Because when you convert xslt to javascript, it’s not xslt, is it? It’s javascript pretending to be xslt. Furthermore, if you restrict rendering xslt to javascript, you lose all the performance benefits of xslt at the engine level.
Also if I’m going to be using Javascript why would I then reach for xslt instead of the other 8000 templating libraries.
All your “its fine to remove it” arguments only work if you ignore all the reasons it’s not fine to remove it. That’s awfully convenient.
You'd be replacing the native-implemented XSLT engine in the browser with a JavaScript-implemented XSLT engine living in the JavaScript sandbox, not replacing XSLT with JavaScript.
There's a theorem with Turing's name on it that clarifies why that's equivalent.
> if you restrict rendering xslt to javascript then you lose all the performance benefits of xslt at the engine level.
Nowadays, I'd have to benchmark before just assuming the native implementation is faster. Especially if one of the issues is libxslt is under-maintained. JS is quite fast these days (and the polyfill investigated in the OP is wasm, not even JS). This problem can also be solved by moving the rendering server-side, in which case your ceiling for speed is much higher (and you're not spending your client's CPU on finishing the rendering of your page; added bonus in this era of mobile, battery-constrained devices).
> Also if I’m going to be using Javascript why would I then reach for xslt instead of the other 8000 templating libraries.
Great point. Why do we have this one arbitrary meta-language for declarative transformation of one datatype that's "blessed" with a native implementation? That's an odd corner-case, isn't it. We should probably simplify by putting it on-par with other solutions for that problem.
Google isn’t forcing anyone’s hand here. They are removing functionality. Everyone else could just not do that and maintain compatibility if they believed it was valuable to do so.
I don’t know why you have a chip on your shoulder for Google but sure. Yes, Google is clearly doing this purely because they are evil and removing this little-used tech is the key to cementing their stranglehold on the internet. Yes, Google is strong-arming poor Apple and Mozilla into this. Yes, everyone who disagrees with you is both a complete moron and a “daddy Google” fanboy.
There's a lot of people saying "All we need to do is maintain libxslt" and a distinct lack of people actually stepping up to maintain libxslt.
I, for one, won't. Not for the purpose of keeping it in the browser. There are just too many other ways to do what it does for me to want that (and that's before we get into conversations about whether I want to be working with XML in the first place on the input side).
> not being lazy developers
Laziness is one of the three great virtues of programming. Every second not spent maintaining libxslt is a second we can spend on something more people care about.
It's a rule for writers, but it applies to software architecture also: kill your darlings.
Sure. But since this announcement I’ve been planning ways to support xslt. Here are the projects I’m considering:
- iced-ui browser with xslt + servo
- contribute to xslt xrust project
- investigate sandboxing xslt in firefox and implementing xrust
- same as the previous but for Chrome
- have an llm generate 1000s of static websites built in xslt and deploy them across the internet
- promote and contribute to the recent Show HN post of an xslt framework
I figure if I can generate enough effort to spam the web with xslt sites then they can’t remove it. Ultimately the first goal is to delay the removal from Chrome. This will delay the removal in other browsers. I don’t care if it’s ineffective. It’s better than doing nothing.
So you can take your lazy tenets of software engineering and your “hyuck Google knows best” elsewhere.
No you don’t, because my passions are for an open, technically varied, unbroken web, and you’ve spent the entire time trying to paint me as crazy for wanting any of that, just because not all technology is equally popular or profitable for Google.
With respect: I've never called you crazy, nor did I imply it, nor did I mean to imply it.
I think your cost/benefit analysis on maintaining a native in-browser implementation of an old, niche declarative transformation language, for a hard-to-read data format that hasn't been the dominant model of sending data to browser clients for at least fifteen years, is flawed, but it's not crazy. Reasonable people can certainly put their priorities in different places; I respect that our priorities don't align on this topic, and you have a right to your opinion.
Open web means popular browsers supporting a wide range of technologies that institutions, businesses, and people use.
Not: popular browsers needle through technologies and tell everyone they know best
Does that make sense? Openness on the web isn’t a new term or concept, so I’m not sure what’s confusing. It’s certainly not killing off technologies people are using.
What is the open web to you? “Overpaid MBA at Google says this is best so you better fall in line.”
> Open web means popular browsers supporting a wide range of technologies that institutions, businesses, and people use.
The “wide range of technologies” is not what makes the web “open”.
The openness comes from the fact that anyone can write web sites and anyone can write a browser that can render the same websites that chrome can render. “More features” does not imply “more open”.
Dropping support for xslt would make the web less open if it were being replaced by some proprietary Google tech. But that’s not what’s happening here at all.
> Not: popular browsers needle through technologies and tell everyone they know best
How else would it possibly work? Everyone has to actively choose the features they will build and support.
I don’t care to continue this discussion, primarily because you are making nearly the same points as two other commenters and it has become an exhausting three-way conversation. You hate XSLT or something and love Google; congrats, you win this discussion. XSLT will be removed, JavaScript will reign king. You will be happy. Everyone will say Mason Freed is right and smart and that XML sucks because no one who matters uses it. I was never going to convince you to like or consider other technologies anyway. And since that is true, this conversation does nothing to help save XSLT and isn’t worth continuing, for me at least.
I wish it weren't the case, but good luck, and I'm sure we'll have nearly the same conversation at the next thread for a standards deprecation that ad companies don't like.
Bold of you to assume other commenters in this thread have no experience with XML or XSLT.
I was there when it was the new hottest thing and I was there when it became last year's thing. These things come and go, and this one's time has come.
Given that the thing we want to support can be supported via server-side translation, client-side translation in JavaScript, or automatic detection and translation via a third-party plugin in JavaScript... What bearing on the open web does it have to preserve this functionality as a JavaScript-accessible API built into the browser?
I don't see how removing this API harms the open web. I do see how it imposes some nonzero cost on website maintainers to adapt to the removal of the API, and I respect that cost is nonzero.
... but I also watched OpenGL evolve into OpenGL ES and the end result was a far better API than we started with.
I don't think XSLTProcessor joining document.defaultCharset in the history books is damage to the open web.
ETA: And not that it matters over-much, but the "overpaid MBA" is an electrical engineering Ph.D. who came to software via sensor manufacturing, so if you're going to attempt to insult someone's training, maybe get it right? Not that he'd be wrong if he were an MBA either, mind.
No, the MBA is Freed’s boss, who installed him there and ensures he enforces the closing of the web. Coming from sensor manufacturing to software isn’t really that impressive, but it does make sense why a sensor-manufacturing engineer would make arguments for removing a spec like XSLT but not a terribly complicated and security-vulnerable spec like Bluetooth. Which probably has 10x the complexity and 10x the security plane of XSLT. Thank you, this whole thing is an even bigger joke.
Enjoy the precedent this sets for other tech not in the Google stable. You clearly are getting what you want so why continue this discussion. Who are you trying to convince?
> Which probably has 10x the complexity and 10x security plane
... and 10x the utility, since unlike XSLT, Bluetooth requires sandboxed and mediated access to OS-level APIs that cannot be replicated feature-for-feature with 3MB of JavaScript.
The question for all the browser developers is not “can we feasibly support this feature” but “is it worth it to support this feature”?
Because they must address the security problems, there is no zero-cost way to maintain compatibility. They either abandon it or rewrite it, and a rewrite comes with support costs forever.
I understand you believe they made the wrong choice and I understand why you feel that way. But according to their calculus they are making the right choice based on how widely used the feature actually is.
I believe you, but I think I missed that part of the conversation.
Running an XSLT engine in JavaScript is sandboxed. It's sandboxed by the JS rules. In terms of security, it's consolidating sandboxing concerns because risk of breaking XSLT becomes risk of breaking the JS engine, whereas right now there are two potential attack vectors to monitor.
(There is an unwritten assumption here: "But I can avoid the JS issues by turning off JavaScript." Which is true, but I think the ship is pretty well sailed for any w3c-compliant browser to be supporting JavaScript-off as a first-class use case these days. From a safety standpoint, we even have situations where there are security breach detections against things like iframe-busting that only work with JavaScript on).
Gopher support in browsers was never, IIUC, a W3C standard.
Piecing the puzzle pieces together from multiple threads:
There's an argument to be made that the HTML standard, which post-dates the browser wars and was more-or-less the detente that was agreed upon by the major vendors of the day, includes a rule that no compliant browser should drop a feature (no matter how old or unused that feature is) because "Don't break the web." In other words: it doesn't matter if there's zero users of something in the spec; if it's in the spec and you claim to be compliant, you support it.
XSLT has been a W3C recommendation since 1999 (XSLT 2 and 3 were added later), and no W3C process has declared it dead. But several browser engines are declaring it too cumbersome to maintain, so they're planning to drop it. This creates an odd situation, because generally a detente has held standards in place: you don't drop something people use, because users won't perceive the sites that use the tech as broken; they'll perceive your browser as broken and go use someone else's browser.
... except that so many vendors are choosing to drop it simultaneously, and the tech is so infrequently used, that there's a good chance this drop will de-facto kill XSLT client-side rendering as a technology web devs can rely upon regardless of what the spec says.
So people are concerned about a perceived shift in the practical "balance of power" between the standards and the browser developers that (reading between the lines) could usher in the bad old days of the Microsoft monopoly again, except that this time it's three or four browser vendors agreeing upon what the web should be and doing it without external input, instead of Microsoft doing what it wants and Firefox fighting them / fighting to keep up. Consolidation of most of the myriad browsers onto only three engines enables this.
> that there's a good chance this drop will de-facto kill XSLT client-side rendering as a technology web devs can rely upon regardless of what the spec says.
This is coming out of WHATWG so in actuality the spec itself is being updated to remove the functionality. So yes, the end state is very much that devs cannot rely on this functionality.
From there they link to the minutes for the meeting where this was raised. Interestingly the Google engineer who raised this at the meeting was formerly at Mozilla for years. I don’t know if Mozilla was already looking to remove this or not.
Agreed; as a technology, it's both clever and fun. I learned it right around the time I first touched functional programming in general and it was neat to see how you could build a chain of declarative rules to go from one document to another document.
Personally, I don't think we need a dedicated native-implemented browser engine for it. But in general I'm glad the tech exists.
Ah, so this is removing libxslt. For a minute I thought XSLT processing was provided by libxml2, and I remembered seeing that the Ladybird browser project just added a dependency on libxml2 in their latest progress update https://ladybird.org/newsletter/2025-10-31/.
I'm curious to see what happens going forward with these aging and under-resourced—yet critical—libraries.
So basically browsers had this [..] the question now is that there is no investment in this. None. And there hasn't been for a really long time, from the browsers' perspective.
XSLT then shows itself to be a very robust technology that has already survived the test of time: for decades (!), despite a lack of support and investment, with key browser bugs deliberately left unfixed and the implementation stuck at version 1.0, it's still being used, and where it's used it holds up well. Meanwhile, elsewhere:
XPath and XSLT continue to evolve. They've really continued to evolve, and people are currently working on an XSLT 4.
And because it's a declarative way of transforming trees and collections of trees. Declarative means you don't say how to do it; you say, "This is what I want."
It's timeless: an _abstracted definition_ to which imperative solutions can, in the best case, be reduced,
with authors unaware of that repeatedly trying (and soon having to try) to reimplement that "not needed" part, the part that had already been abstracted away (ex. https://news.ycombinator.com/item?id=45183624), in more or less common or compatible ways.
So better keep it: not everybody can afford expensive solutions, and there are nonprofits too, which don't depend on a percentage of money wasted repeating the same work and would like to KISS!
I know it makes me an old and I am biased because one of the systems in my career I am most proud of I designed around XSLT transformations, but this is some real bullshit and a clear case why a private company should not be the de facto arbiter of web standards. Have a legacy system that depends on XSLT in the browser? Sucks to be you, one of our PMs decided the cost-benefit just wasn't there so we scrapped it. Take comfort in the fact our team's velocity bumped up for a few weeks.
And yes I am sour about the fact as an American I have to hope the EU does something about this because I know full-well it's not happening here in The Land of the Free.
I don't use XSLT and don't object to this, but seeing "security" cited made me realize how reflexively distrustful I've become of them using that justification for a given decision. Is this one actually about security? Who knows!
Didn't this come pretty directly after someone found some security vulns? I think the logic was, this is a huge chunk of code that is really complex which almost nobody uses outside of toy examples (and rss feeds). Sure, we fixed the issue just reported, but who knows what else is lurking here, it doesn't seem worth it.
As a general rule, simplifying and removing code is one of the best things you can do for security. Sure you have to balance that with doing useful things. The most secure computer is an unplugged computer but it wouldn't be a very useful one; security is about tradeoffs. There is a reason though that security is almost always cited - to some degree or another, deleting code is always good for security.
The vulnerabilities themselves often didn't really affect Chrome, but by the maintainers' own admission the code was never intended to be security critical. They got burned out after a series of vulnerability reports with publication deadlines, and decided to just treat security bugs like normal bugs so the community could help fix things. That doesn't really fit with the "protect users by keeping security issues secret for three months" approach corporations prefer. Eventually the maintainers stepped down.
Neither Google nor Apple were willing to sponsor a fork of the project but clearly they can't risk unmaintained dependencies in their billion dollar product, so they're rushing to pull the plug.
"Who knows what's lurking there" is a good argument for minimizing attack surface, but Google has only been adding more attack surface over the past couple of years. I find it hard to defend the idea that processing a structured document should be outside a browser's feature set while JavaScript USB drivers and serial ports are necessary to drive the web. The same way libxml2 was never intended to be security critical, many GPU drivers were never written to protect against malicious programs, yet WebGPU and similar technology is being pushed hard and fast.
If we're deleting code to improve against theoretical security risks, I know plenty of niche APIs that should probably be axed.
> the maintainers' own admission the code was never intended to be security critical
I find this hard to believe. XML is a portable data serialization format. The entire point of XML is to transfer data between separate parties who presumably don't trust each other. Most non-browser XML usages are security critical and have been from the beginning.
Second, I don't know what libxml has to do with anything. I know the libraries are related, but browsers are not removing libxml; they are removing libxslt.
> Neither Google nor Apple were willing to sponsor a fork of the project but clearly they can't risk unmaintained dependencies in their billion dollar product, so they're rushing to pull the plug.
Why would they sponsor a fork? I wouldn't in their position. A fork solves the maintenance issue but not the other problems. The lack of maintainers exacerbates the problem, but it is not the problem.
There is this weird entitlement people have with Google, where people expect Google to provide random open source projects with free labour even if it's not in line with Google's interests.
If Google were demanding someone else step up to maintain it, then saying Google should do it would be reasonable. But they aren't doing that. After all, a new maintainer did step up, but the deprecation is still happening, as it wasn't solely about that.
> but Javascript USB drivers and serial ports are necessary to drive the web
I also think these are stupid features, but their risk profile is probably quite a bit lower than XSLT's. Not all code has the same risk factors. What sort of API is exposed, how it's exposed, code quality, etc. all matter.
Ultimately, though, it's Google's web browser. People have this weird entitlement with Google where they expect Google to make specific product decisions. It's open source software; the solution to Google making bad product decisions is the right to fork. If the decisions are that bad, the fork will win. Having Google design their software by a committee, where the committee is the entire internet, is not reasonable, nor would it make a good product.
> As a general rule, simplifying and removing code is one of the best things you can do for security.
Sure, but that’s not what they’re doing in the big picture. XSLT is a tiny drop in the bucket compared to all the surface area of the niche, non-standard APIs tacked onto Chromium. It’s classic EEE.
My understanding is that, contrary to popular opinion, it is Firefox, not Chrome, that originally pushed for the removal, so I don't know how relevant that is. It seems like all browser vendors are in agreement on XSLT.
That said, XSLT is a bit of a weird API in how it interacts with everything. Not all APIs are equally risky, and I suspect XSLT is pretty high up there on the risk vs. reward ratio.
There are security issues in the C implementation they currently use. They could remove this without breaking anything by incorporating the JS XSLT polyfill into the browser. But they won't because money.
> Finding and exploiting 20-year-old bugs in web browsers
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
It's true that there are security issues, but it's also true that they don't want to put any resources into making their XSLT implementation secure. There is strong unstated subtext that a huge motivation is that they simply want to rip this out of Chrome so they don't have to maintain it at all.
I'd never written any XSL before last week when I got the crazy idea of putting my resume in XML format and using stylesheets to produce both resume and CV forms, redacted and not, with references or not. And when I had it working in HTML, I started on the Typst stylesheet. xsltproc, being so old, basically renders the results instantly. And Typst, being so new, does as well.
XSLT seems like something that could be implemented with WebAssembly (and/or JavaScript) in an extension (if the extension mechanism is made suitable; I think some changes might be helpful to support this and other things), possibly one that is included by default (and can be overridden by the user, like any other extension should be). If it is implemented in that way, it might avoid some of the security issues. (PDF could also be implemented in a similar way.)
(There are also reasons why it might be useful to allow the user to manually install native code extensions, but native code seems to be not helpful for this use, so to improve security it should not be used for this and most other extensions.)
The lead dev driving the Chrome deprecation built a wasm polyfill https://github.com/mfreed7/xslt_polyfill. Multiple people proposed in the Github discussions leading up to this that Google simply make the polyfill ship with Chrome as an on-by-default extension that could be disabled in settings, but he wouldn't consider it.
panos: next item, removing XSLT. There are usage numbers.
stephen: I have concerns. I kept this up to date historically for Chromium, and I don't trust the use counters based on my experience. Total usage might be higher.
dan: even if the data were accurate, not enough zeros for the usage to be low enough.
mason: is XSLT supported officially?
simon: supported
mason: maybe we could just mark it deprecated in the spec, to make the statement that we're not actively working on it.
brian: we could do that on MDN too. This would be the first time we have something baseline widely available that we've marked as removed.
dan: maybe we could offer helpful pointers to alternatives that are better, and why they're better.
panos: maybe a question for olli. But I like brian's suggestion to mark it in all the places.
dan: it won't go far unless developers know what to use instead.
brian: talk about it in those terms also. Would anyone want to come on the podcast and talk about it? I'm guessing people will have objections.
emilio: we have a history of security bugs, etc.
stephen: yeah that was a big deal
mason: yeah we get bugs about it and have to basically ignore them, which sucks
brian: people do use it and some like it
panos: put a pin in it, and talk with olli next time?
As for the rest of your [working for Google] comment: to put it simply, you come off as someone inexperienced. Maybe I'm wrong and you have a big list of features you've successfully removed, and public discussions you had in the process; if so, there's probably something to learn from how those differed from this one.
Removing HTML and CSS would also make the browser more secure, but it would also, I would argue, be very counterproductive for users.
Most read-only content and light editing can be achieved with raw data + XSLT.
The web has become a sledgehammer for cracking a nut.
For example, with XSLT you could easily render read-only content without complex and expensive office apps. That is enough for academia, government, and small businesses.
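As a minimal sketch of the "raw data + XSLT" idea (file and element names here are invented for illustration), the data is published as plain XML, and a single processing instruction tells the browser which stylesheet to use to render it:

```xml
<?xml version="1.0"?>
<!-- hypothetical data file; "records.xsl" is an illustrative stylesheet name -->
<?xml-stylesheet type="text/xsl" href="records.xsl"?>
<records>
  <record><name>Widget A</name><price>9.50</price></record>
  <record><name>Widget B</name><price>12.00</price></record>
</records>
```

The raw data stays machine-readable for anyone consuming it programmatically, while a browser with XSLT support shows a human-readable table.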
If they really cared about "security" they would remove JS or try to encourage minimising its use. That is a huge attack surface in comparison, but they obviously want to keep it so they can shove in more invasive and hostile user-tracking and controlling functionality.
I do all of my browsing with Javascript disabled. I've done this for decades now, as a security precaution mainly, but I've also enjoyed some welcome side-effects where paywalls disappeared and ads became static and unobtrusive. I wasn't looking for those benefits but I'll take 'em. In stride.
I've also witnessed a welcome (but slow) change in site implementations over the years: there are few sites completely broken by the absence of JS. Still some give blank screens and even braindead :hidden attributes thrown into the <noscript> main page to needlessly forbid access... but not as many as back in the day when JS first became the rage.
I don't know much about XSLT other than the fact that my Hiawatha web server uses it to make my directory listings prettier, and I don't have to add CSS or JS to get some style. I hate to see a useful standard abandoned by the big boys, but what can I do about it?
I bristle when I encounter pages with a few hundred words of content surrounded by literally megabytes of framework and flotsam, but that's the gig, right, wading through the crap to find the ponies.
It's a shame the browser developers are making an open, interoperable, semantic web more difficult. It's not surprising, though. Browsers started going downhill after they removed the status bar and the throbber and made scrollbars useless.
> Didn't this effort start with Mozilla and not Google?
Maybe round one of it like ten years ago did? From what I understand, it's a Google employee who opened the "Hey, I want to get rid of this and have no plans to provide a zero-effort-for-users replacement." Github Issue a few months back.
> It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.
I don't see any evidence of that claim from the materials I have available to me. [0] is the Github Issue I mentioned. [1] is the WHATNOT meeting notes linked to from that GH Issue... though I have no idea who smaug is.
It started with Mozilla, Apple, and Opera jumping ship and forming WHATWG. That stopped new XML related technologies from being adopted in browsers twenty years ago. Google is just closing the casket and burying the body.
Why would I forget about XSLT, a really good technology pushed to the wayside by bad-faith actors? Why would I forget Mason Freed, a person dedicating himself to ruining perfectly good technology that needs a little love?
Do you have some sort of exclusively short-term memory or something, where you can't remember someone's name? Bizarre reply. Other people may have had a similarly lazy idea, but Mason is the one pushing it and leading the charge.
It seems maybe you want me to blame this on Google as a whole, but that would mean bypassing blame and giving in to their ridiculous BS.
Well, maybe you should at least start licking some boots, because your attitude towards a templating language is the billion-dollar-corpo way. You'd fit right in, and could at least get some perks out of shitting on technology you probably never fully understood, while being a better contributor to closing off the web.
All of the disliked technologies and people who use them will be gone one day as we champion these destructive initiatives onward. Thank you hater!
Blame Apple and Mozilla, too, then. They all agreed to remove it.
They all agreed because XSLT is extremely unpopular and worse than JS in every way. Performance/bloat? Worse. Security? MUCH worse. Language design? Unimaginably worse.
EDIT: I wrote thousands of lines of XSLT circa 2005. I'm grateful that I'll never do that again.
This is only repeated by people who have never used it.
XSLT is still a great way of easily transforming xml-like documents. It's orders of magnitude more concise than transforming using Javascript or other general programming languages. And people are actively re-inventing XSLT for JSON (see `jq`).
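As a rough illustration of that conciseness (a hedged sketch; the element names are invented, not from any particular feed format), turning every `item` of a feed-like document into an HTML list entry takes just two template rules:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- root rule: wrap all items in a list -->
  <xsl:template match="/">
    <ul><xsl:apply-templates select="//item"/></ul>
  </xsl:template>
  <!-- each item becomes a linked list entry -->
  <xsl:template match="item">
    <li><a href="{link}"><xsl:value-of select="title"/></a></li>
  </xsl:template>
</xsl:stylesheet>
```

The equivalent hand-rolled traversal-and-string-building code in a general-purpose language is typically several times longer.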
I used to use XSLT a lot, though it was a while ago.
You can use JavaScript to get the same effect and, indeed, write your transforms in much the same style as XSLT. JavaScript has XPath (still). You have a choice of template language, but JSX is common and convenient. A function for applying XSLT-style matching rules for an XSLT push style of transform is only a few lines of code.
Do you have a particular example where you think Javascript might be more verbose than XSLT?
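To make the "few lines of code" claim concrete, here's a hedged sketch (all names invented, not any particular library) of an XSLT-style push transform in plain JavaScript: rules match nodes, and applying templates dispatches each node to the first matching rule, recursing via the `apply` callback.

```javascript
// Rules mimic xsl:template: a match predicate plus a render function
// that can recurse into children via the supplied apply() callback.
const rules = [
  { match: n => n.name === "doc",  render: (n, apply) => `<ul>${apply(n.children)}</ul>` },
  { match: n => n.name === "item", render: (n, apply) => `<li>${n.text}</li>` },
];

// The analogue of xsl:apply-templates: dispatch each node to the
// first rule whose predicate matches it.
function applyTemplates(nodes) {
  return nodes.map(n => {
    const rule = rules.find(r => r.match(n));
    return rule ? rule.render(n, applyTemplates) : "";
  }).join("");
}

// A toy document tree standing in for parsed XML.
const doc = { name: "doc", children: [
  { name: "item", text: "first",  children: [] },
  { name: "item", text: "second", children: [] },
]};

console.log(applyTemplates([doc]));
// prints <ul><li>first</li><li>second</li></ul>
```

Real code would match against DOM nodes (e.g. with `document.evaluate` XPath queries in a browser), but the dispatch mechanism itself really is this small.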
Who is transforming XML documents on the web? Most people produce HTML to begin with, so XSLT is a solution seeking a problem. If you really insist, you could just use XSLT via server side rendering.
I actually do have to work with raw XML and XSLTs every once in a while for a java-based CMS and holy hell, it's nasty.
Java in general... Maven: trying to implement extremely simple things (e.g., only executing a specific thing as part of the pipeline when certain conditions are met) is an utter headache in the pom.xml, because XML is not a programming language!
It is an unfortunate fact about our industry that all build tools suck. Tell me what your favorite build tool is and I can point at hundreds of HN threads ripping it to shreds. Maybe it's NPM? Cue the screams...
I agree though, "XML is not a programming language" and attempts to use it that way have produced poor results. You should have seen the `ant` era! But this is broader than XML - look at pretty much every popular CI system for "YAML is not a programming language".
That doesn't mean that XML isn't useful. Just not as a programming language.
But, that's what XSL is! XSL is a Turing-complete programming language in XML for processing XML documents. Being in XML is a big part of what makes XSL so awful to write.
XSL may be Turing-complete but it's not a programming language and wasn't intended to be one. It's a declarative way to transform XML. When used as such I never found it awful to write... it's certainly much easier than doing the equivalent in general purpose programming languages.
Maybe by analogy: There are type systems that are Turing complete. People sometimes abuse them to humorous effect to write whole programs (famously, C++ templates). That doesn't mean that type systems are bad.
XSL is a functional programming language, not a declarative language. When you xsl:apply-template, you're calling a function.
Functional programming languages can often feel declarative. When XSL is doing trivial, functional transformations, when you keep your hands off of xsl:for-each, XSL feels declarative, and doesn't feel that bad.
The problem is: no clean API is perfectly shaped for UI, so you always wind up having to do arbitrary, non-trivial transformations with tricky uses of for-each to make the output HTML satisfy user requirements.
XSL's "escape hatch" is to allow arbitrary Turing-complete transformations. This was always intended to exist, to make easy transformations easy and hard transformations possible.
You basically never need to write Turing-complete code in a type system, but in any meaningful XSL project you will absolutely need to write Turing-complete XSL.
XSL's escape hatch is always needed, but it's absolutely terrible, especially compared to JS, especially compared to modern frameworks. This is why JS remained popular, but XSL dwindled.
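To make that contrast concrete (a sketch with invented element names), the push style lets the input drive which template fires, while xsl:for-each is the imperative-feeling escape hatch you reach for when the output shape diverges from the input shape:

```xml
<!-- push style: dispatch on node type, input drives the output -->
<xsl:template match="section">
  <div><xsl:apply-templates/></div>
</xsl:template>

<!-- pull style: iterate explicitly to build a structure (a table of
     contents) that doesn't mirror the input tree -->
<xsl:template match="toc">
  <ul>
    <xsl:for-each select="//section/title">
      <li><xsl:value-of select="."/></li>
    </xsl:for-each>
  </ul>
</xsl:template>
```

The first rule stays declarative; the second is where, per the comment above, real projects end up writing arbitrary logic in XML syntax.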
> It is an unfortunate fact about our industry that all build tools suck. Tell me what your favorite build tool is and I can point at hundreds of HN threads ripping it to shreds. Maybe it's NPM? Cue the screams...
npm isn't even a build tool, it's a package manager and at that it's actually gotten quite decent - the fact that the JS ecosystem at large doesn't give a fuck about respecting semantic versioning or keeps reinventing the wheel or that NodeJS / JavaScript itself lacks a decent standard library aren't faults of npm ;)
Maven and Gradle, in contrast, are one-stop shops: both build orchestrators and dependency managers. As for Ant: oh hell yes, I'm aware of that. The most horrid build system I encountered in my decade-long tenure as "the guy who can figure out pretty much any nuclear-submarine project" (a.k.a. one that only surfaces every few years, after everyone working on it has departed) involved Gradle, which then orchestrated Maven and Ant. Oh, and the project was built on a Jenkins that was half DSL, half clicked together in the web UI, and the runner that executed the builds was a manually set up, "organically grown" server. That one was a holy damn mess to understand, unwind, clean up, and migrate to GitLab.
> look at pretty much every popular CI system for "YAML is not a programming language".
Oh yes... I only had the misfortune of having to code for GitHub Actions once in my lifetime; it's utter fucking madness compared to GitLab.
Comparing a single-purpose declarative language that is not even really Turing-complete with all the ugly hacks needed to make DOM/JS reasonably secure does not make any sense.
What exactly can you abuse in XSLT (without non-standard extensions) in order to do anything security-relevant? (DoS by infinite recursion or memory exhaustion does not count; you can do the same in JS...)
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
They are about libxslt, but Mason Freed doesn't want you to know that. They could contribute to a Rust project which has already implemented XSLT 1.0, thus matching the browsers. But that would be good software engineering, and logical.
> XSLT is extremely unpopular and worse than JS in every way
This isn't a quorum of folks torpedoing a proposed standard. This is a decades-old stable spec and an established part of the Web platform, and welching on their end of the deal will break things, contra "Don't break the Web".
Oh, that's the operative part? Accept my apologies. What I meant to say is, "I can see that you're deeply, deeply concerned about being able to continue beating your wife. I think you should reconsider your position on this matter."
No question mark, see? So I should be good now.
> Am I missing something here?
Probably not. People who engage in the sort of underhandedness linked to above generally don't do it without knowing that they're doing it. They're not missing anything. It's deliberate.
So, too, I would guess, is the case with you—particularly since your current reply is now employing another familiar underhanded rhetorical move. Familiar because I already called it out within the same comment section:
> The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
I seem to have personally offended you, and for that I am sorry.
This seems personal to you so I'll bow out of further discourse on the subject as it is not particularly personal to me. The websites I maintain use a framework to build RSS output, and the framework will be modified to do server-side translation or polyfill as needed to provide a proper HTML display experience for end-users who want that.
They did not agree to remove it. This is a spun lie from the public posts I can see. They agreed to explore removing it but preferred to keep it for good reasons.
Only Google is pushing forward and twisting that message.
> They did not agree to remove it. This is a spun lie from the public posts I can see. They agreed to explore removing it but preferred to keep it for good reasons.
Mozilla:
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support.
> WebKit is cautiously supportive. We'd probably wait for one implementation to fully remove support, though if there's a known list of origins that participate in a reverse origin trial we could perhaps participate sooner.
So you’re choosing to help them spin the lie by cherry picking comments.
The Mozilla comment itself ends with:
> If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
> If it turns out not to be possible to remove the feature, we’d like to replace our current implementation. The main requirements would be compatibility with existing web content, addressing memory safety security issues, and not regressing performance on non-XSLT content. We’ve seen some interest in sandboxing libxslt, and if something with that shape satisfied our normal production requirements we would ship it.
But the only way it’s possible to remove the feature is if you ignore everyone asking you to please not to remove it.
Therefore, by totally ignoring pushback, you can twist the Mozilla reps' words to mean the only option is to remove it.
Similarly with the Webkit comment:
> WebKit is cautiously supportive.
Both these orgs requested investigation, not removal. Both expressed some concern and caution. Google did not; they only ever pushed forward with removing it, even going so far as to ignore the follow-up request to implement XSLT 3.0.
No it’s not blatantly untrue. It’s unblatantly misleading.
Furthermore I’d say for those specific comments, “go ahead and remove it”, the inverse is blatantly untrue.
If somebody says “our position is A but if that’s not possible we should do B”, it means they prefer A. It doesn’t mean they prefer B, and telling people that they prefer B when you know otherwise is dishonest.
The comment isn't "our position is A"; the comment is "our position is A if B, C, D aren't possible for compatibility reasons". Aka, "if we must remove it then fine, else we would like to improve it".
Google then side stepped all the compatibility concerns and notions to improve it and arguments against removing it so they could only address A.
Go ahead and argue for all the word-twisting these bad actors have done. You won't change my mind: this is an obvious attack on the open web by people subservient to the ad-tech industry. Prove to me it's not, when all the browsers depend on a platform for ad money.
They have installed people like Mason Freed into these positions, people incapable of reason, to carry this objective forward.
This is the problem with any C/C++ codebase; using Rust instead would have been a better solution than just removing web standards from what is supposed to be a web browser.
> When that solution isn't wanted, the polyfill offers another path.
A solution is only a solution if it solves the problem.
This sort of thing, basically a "reverse X/Y problem", is an intellectually dishonest maneuver, where a thing is dubbed a "solution" after just, like, redefining the problem to not include the parts that make it a problem.
The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
> As mentioned previously, the RSS/Atom XML feed can be augmented with one line, <script src="xslt-polyfill.min.js" xmlns="http://www.w3.org/1999/xhtml"></script>, which will maintain the existing behavior of XSLT-based transformation to HTML.
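For concreteness, here is roughly where that announced one-liner would sit inside the feed document itself. This is a sketch: the stylesheet path and the feed contents are made up, and the script URL would be wherever the polyfill is actually hosted.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <!-- the one-line addition from the announcement, in the XHTML namespace -->
  <script src="xslt-polyfill.min.js" xmlns="http://www.w3.org/1999/xhtml"></script>
  <title>Example feed</title>
</feed>
```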
Oh, yeah? It's that easy? So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right? Or is this another instance where well-paid engineers on the Chrome team who elected to accept the responsibility of maintaining the stability of the Web have decided that they like the getting-paid part but don't like the maintaining-the-stability-of-the-Web part and are talking out of both sides of their mouths?
> So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right?
As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
> As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
This is a (poor) attempt at gaslighting/retconning.
The phrase "Don't break the Web" is not original to this thread.
(I can't say I look forward to your follow-up reply employing sleights of hand like claims about how stuff like Flash that was never standardized, or the withdrawal of experimental APIs that weren't both stable/finalized and implemented by all the major browsers, or the long tail of stuff on developer.mozilla.org that is marked "deprecated" (but nonetheless still manages to work) are evidence of your claim and that browser makers really do have a history of doing this sort of thing. This is in fact the first time something like this has actually happened—all because there are engineers working on browsers at Google (and Mozilla and Apple) that are either confused about how the Web differs from, say, Android and iOS, or resentful of their colleagues who get to work on vendor SDKs where the API surface area is routinely rev'd to remove whatever they've decided no longer aligns with their vision for their platform. That's not what the Web is, and those engineers can and should go work on Android and iOS instead of sabotaging the far more important project of attending to the only successful attempt at a vendor-neutral, ubiquitous, highly accessible, substrate for information access that no one owns and that doesn't fuck over the people who rely on it being stable.)
Mozilla's own site expounds on "Don't Break the Web" as "Web browser vendors should be able to implement new web technologies without causing a difference in rendering or functionality that would cause their users to think a website is broken and try another browser as a result."
There is no meaningful risk of that here. The percentage of web users who are trying to understand content via XSLT'd RSS is nearly zero, and for everyone who is, there is either polyfill or server-side rendering to correct the issue.
> and those engineers can and should go work on Android and iOS
With respect: taken to its logical conclusion, that would be how the web as a client-renderable system dies and is replaced by Android and iOS apps as the primary portal to interacting with HTTP servers.
If the machine is so complex that nobody wants to maintain it, it will go unmaintained and be left behind.
> Mozilla's own site expounds on "Don't Break the Web" as
I'm a former Mozillian. I don't give a shit how it has been retconned by whomever happened to be writing copy that day, if indeed they have—it isn't even worth bothering to check.
"Don't break the Web" means exactly what it sounds like it means. Anything else is a lie.
> If the machine is so complex that nobody wants to maintain it, it will go unmaintained
There isn't a shortage of people willing to work on Web browsers. What there is is a finite amount of resources (in the form of compensation to engineers), and a set of people holding engineering positions at browser companies who, by those engineers' own admission, do not satisfy the conditions of being both personally qualified and equipped to do the necessary work here. But instead of stepping down from their role and freeing up resources to be better allocated to those who are equipped and willing, they're keeping their feet planted for the simple selfish reason that doing otherwise might entail a salary reduction and/or diminished power and influence. So they sacrifice the Web.
Sounds like you're volunteering to maintain the XSLT support in Firefox. That's great! That means when Chrome tries to decommission it, users who rely upon it will flock to Firefox instead because it's the only browser that works right with the websites they use.
It's a question about whether you still beat your wife. If the answer is yes, then your answer is yes, and if the answer is no, then your answer is no.
It sounds like you're looking for a jurisdiction where you can beat your wife.
At the very least it's (probably*) illegal whether you agree with it or not. Even where it's not illegal, physical violence against your partner isn't okay, even if they've angered you.
This is all entirely off-topic from using XSLT in a website. As opposed to rendering XSLT client-side or server side, or rendering it using an in-engine native-implementation API or a JavaScript engine.
Happy to talk about those if you want; not interested at all in this violence digression you've decided to go on.
"The reality is that for all of the work that we've put into HTML, and CSS, and the DOM, it has fundamentally utterly failed to deliver on its promise.
It's even worse than that, actually, because all of the things we've built aren't just not doing what we want, they're holding developers back. People build their applications on frameworks that _abstract out_ all the APIs we build for browsers, and _even with those frameworks_ developers are hamstrung by weird limitations of the web."
I think in the context of that link, they see React as a failing of the web. If the W3C/WHATWG/browser vendors had done a reasonable job of moving web technology forward, things like React would be far less necessary. But they spent all their time and energy working on things like <aside> and web components instead of listening to web developers and building things that are actually useful in a web developer’s day-to-day life. Front-end frameworks like React, on the other hand, did a far better job of listening to developers and building what they needed.
One of the biggest flaws of the current web technology stack is the unholy mix of concerns with respect to layout. Layout is handled by CSS and HTML in a way that drives people crazy. I recently wanted to have a scroll bar for super long text inside a table cell. Easy, right? Turns out you need to change the HTML to include a <div> inside the table cell, since there is no way to style the table cell itself and have it do what you expect it to do.
XSLT doesn't solve this at all, since you're just generating more of the problem, i.e. more HTML and CSS. It feels like there should have been some sort of language exclusively for layout definition that doesn't necessarily know about the existence of HTML and CSS beyond selector syntax.
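The table-cell workaround described above, as a sketch (the class name is made up): a wrapper <div> carries the height and overflow that the td won't reliably honor on its own.

```html
<style>
  /* setting overflow/max-height on the td alone typically won't scroll */
  .cell-scroll { max-height: 6em; overflow-y: auto; }
</style>
<table>
  <tr>
    <td>
      <!-- the extra structural div the comment complains about -->
      <div class="cell-scroll">very long text that should scroll…</div>
    </td>
  </tr>
</table>
```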
Your response is like seeing the cops go to the wrong house to kick in your neighbors' door, breaking the ornaments in their entryway, and then saying to yourself, "Good. I hate yellow, and would never have any of that tacky shit in my house."
As the first sentence of your comment indicates, the fact that it's supported and there for people to use doesn't (and hasn't) result in you being forced to use it in your projects.
Yes, but software complexity, and especially browser complexity, has ballooned enormously over the years. And while XSLT probably plays only a tiny part in that, it's embedded in every Electron app that could do in 1 MB what it takes 500 MB to do, it makes it incrementally harder to build and maintain a competing browser, etc., etc. It's not zero cost.
I do tend to support backwards compatibility over constant updates and breakage, and needless hoops to jump through as e.g. Apple often puts its developers through. But having grown up and worked in the overexuberant XML-for-everything, semantic-web 1000-page specification, OOP AbstractFactoryTemplateManagerFactory era, I'm glad to put some of that behind us.
Point to the part of your comment that has any-fucking-thing to do with the topic at hand (i.e. engages with the actual substance of the comment that it's posted as a reply to). Your comment starts with "Yes but", as if to present it as a rebuttal or rejoinder to something that was said, but then proceeds into total non-sequitur. It's an unrestrained attempt at a change of subject and makes for a not-very-hard-to-spot type of misdirection.
Your neighbors' ugly yellow tchotchkes have in no way forced you—nor will they ever force you—to ornament your house with XSLT printouts.
Alright, you're extremely rude and combative so I'll probably tap out here.
But consider if the "yellow tchotchkes" draw some power from my house, produce some stinky blue smoke that occasionally wafts over, requires a government-paid maintenance person to occasionally stop by and work on, that I partly pay for with my taxes.
In contrast to poisoning the discussion with subtle conversational antimatter while wearing a veneer of amiability. My comments in this thread are not insidiously off-topic non-replies presented as somehow relevant in apposition to the ones that precede them.
> consider if the "yellow tchotchkes" draw some power from my house, produce some stinky blue smoke that occasionally wafts over, requires a government-paid maintenance person to occasionally stop by and work on, that I partly pay for with my taxes
Anyone making the "overwork the analogy" move automatically loses, always. Even ignoring that, nothing in this sentence even makes any sense wrt XSLT in the browser or the role of browser makers as stewards of stable Web standards. It's devoid of any cogent point and communicates no insight on the topic at hand.
Remove crappy JS APIs and other web tech first before deprecating XSLT, which is a true-blue public standard. For folks who don't enable JS, XSLT is a life-saver for viewing XML data.
If we're talking about removing things for security, the ticking time bomb that is WebUSB seems top of the list to me: dangerous, not actually a standard (it is Chrome-only), and yet a bunch of websites treat it as a good reason to be Chrome-only.
Unquestionably the right move. From the various posts on HN about this, it's clear that (A) not many people use it, (B) it increases security vulnerability surface area, and (C) the few people who do claim to use it have nothing to back up the claim.
The major downside to removing this seems to be that a lot of people LIKE it. But eh, you're welcome to fork Chromium or Firefox.
Chrome and other browsers could virtually completely mitigate the security issues by shipping, in the browser, the polyfill they're suggesting all sites depending on XSLT deploy. By doing so, their XSLT implementation would become no less secure than their JavaScript implementation (and fat chance they'll remove that). The fact that they've rejected doing so is a pretty clear indication that security is just an excuse, IMO.
I wish more people would see this. They know exactly how to sandbox it, they’re telling you how to, they’re even providing and recommending a browser extension to securely restore the functionality they’re removing!
The security argument can be valid motivation for doing something, but is utterly illegitimate as a reason for removing. They want to remove it because they abandoned it many years ago, and it’s a maintenance burden. Not a security burden, they’ve shown exactly how to fix that as part of preparing to remove it!
And it's a very small maintenance burden at that. Shipping the polyfill would technically still be a dependency, but about as decoupled a dependency as you can get. Its only interaction with the rest of the code would be through public APIs that browsers have to keep stable anyway.
Yes and no. It's true that if you had to pick one to support and were only considering security, it would virtually certainly be better to go with XSLT. However, browser makers basically _have_ to support javascript, so as long as XSLT has a non-zero attack surface (which it does, at least as long as it's a native implementation), including it would be less secure. That said, as I pointed out, there are obvious ways to mitigate this issue and reduce the extra attack surface to effectively zero.
I recently had an interesting chat with Liam Quin (who was on W3C's XML team) about XML and CDATA on Facebook, where he revealed some surprising history!
Liam Quin in his award-winning weirdest hat, also Microsoft's Matthew Fuchs' talk on achieving extensibility and reuse for XSLT 2.0 stylesheets, and Stephan Kepser's simple proof that XSLT and XQuery are Turing complete using μ-recursive functions, and presentations about other cool stuff like Relax NG:
How do we communicate the idea that declarative markup is a good idea? Declarative markup is where you identify what is there, not what it does. This is a title, not, make this big and bold. This is a part number, not, make this blink when you click it - sure, you can do that to part numbers, but don't encode your aircraft manual that way.
But this idea is hard to grasp, for the same reason that WYSIAYG word processors (the A stands for All, What you see is all you get) took over from descriptive formatting in a lot of cases.
For an internal memo, for an insurance letter to a client, how much matters? Well, the insurance company has to be able to search the letters for specific information for 10, 20, 40, 100 years. What word processor did you use 40 years ago? Wordstar? Magic Wand? Ventura?
Liam Quin: hahaha i actually opposed the inclusion of CDATA sections when we were designing XML (by taking bits we wanted from SGML), but they were already in use by the people writing the XML spec! But now you’ve given me a reason to want to keep them. The weird syntax is because SGML supported more keywords, not only CDATA, but they were a security fail.
Don Hopkins: There was a REASON for the <![SYNTAX[ ]]> ?!?!? I thought it was just some kind of tribal artistic expressionism, like lexical performance art!
At TomTom we were using xulrunner for the cross platform content management tool TomTom Home, and XUL abused external entities for internationalizing user interface text. That was icky!
For all those years programming OpenLaszlo in XML with <![CDATA[ JavaScript code sections ]]>, my fingers learned how to type that really fast, yet I never once wondered what the fuck ADATA or BDATA might be, and why not even DDATA or ZDATA? What other kinds of data are there anyway? It sounds kind of like quantum mechanics, where you just have to shrug and not question what the words mean, because it's just all arbitrarily weird.
Liam Quin: haha it’s been 30 years, but, there’s CDATA (character data); replaceable character data (RCDATA), in which entity references like `&eacute;` are recognised but `<` is not; IGNORE and INCLUDE; and the bizarre TEMP, which wraps part of a document that might need to be removed later. After `<!` you could also have comments, <!-- .... --> for example (all the delimiters in SGML could be changed).
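For anyone who hasn't run into these, a tiny sketch of what CDATA buys you in XML (the element name is made up):

```xml
<!-- inside CDATA, '<' and '&' are literal characters, not markup -->
<code><![CDATA[ if (a < b && c) { go(); } ]]></code>

<!-- outside CDATA, the same text must escape them as references -->
<code> if (a &lt; b &amp;&amp; c) { go(); } </code>
```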
Don Hopkins: What is James Clark up to these days? I loved his work on Relax/NG, and that Dr. Dobb's interview "The Triumph of Simplicity".
Note: James Clark is arguably the single most important engineer in XML history:
- Lead developer of SGMLtools, expat, and Jade/DSSSL
- Co-editor of the XML 1.0 specification
- Designer of XSLT 1.0 and XPath 1.0
- Creator of Relax NG, one of the most elegant schema languages ever devised
He also wrote the reference XSLT implementation XT, used in early browsers and toolchains before libxslt dominated.
James Clark’s epic 2001 Doctor Dobb's Journal "A Triumph of Simplicity: James Clark on Markup Languages and XML" interview captures his minimalist design philosophy and his critique of standards and committee-driven complexity (which later infected XSLT 2.0).
It touches on separation of concerns, simplicity as survival, a standard isn't one implementation, balance of pragmatism and purity, human-scale simplicity, uniform data modeling, pluralism over universality, type systems and safety, committee pathology, and W3C -vs- ISO culture.
He explains why XML is designed the way it is, and reframes the XSLT argument: his own philosophy shows that when a transformation language stops being simple, it loses the very quality that made XML succeed.
Nice find — interesting to see browsers moving to drop XSLT support.
I used XSLT once for a tiny site and it felt like magic—templating without JavaScript was freeing.
But maybe it’s just niche now, and browser vendors see more cost than payoff.
Curious: have any of you used XSLT in production lately?
I lead a team that manages trade settlements for hedge funds; data is exported from our systems as XML and then transformed via XSLT into whatever format the prime brokers require.
All the transforms are maintained by non-developers, mainly business analysts. Because the language is so simple we don't need to give them much training: just get IntelliJ installed on their machines, show them a few samples, and let them work away.
Good, XSLT was crap. I wrote an RSS feed XSLT template. Worst dev experience ever. No one is/was using XSLT. Removing unused code is a win for browsers. Every anti bloat HNer should be cheering
The first few times you use it, XSLT is insane. But once something clicks, you figure out the kinds of things it’s good for.
I am not really a functional programming guy. But XSLT is a really cool application of functional programming for data munging, and I wouldn’t have believed it if I hadn’t used it enough for it to click.
Right. I didn't use it much on the client side so I am not feeling this particular loss so keenly.
But server side, many years ago I built an entire CMS with pretty arbitrary markup regions that a designer could declare (divs/TDs/spans with custom attributes basically) in XSLT (Sablotron!) with the Perl binding and a customised build of HTML Tidy, wrapped up in an Apache RewriteRule.
So designers could do their thing with dreamweaver or golive, pretty arbitrarily mark up an area that they wanted to be customisable, and my CMS would show edit markers in those locations that popped up a database-backed textarea in a popup.
What started off really simple ended up using Sablotron's URL schemes to allow a main HTML file to be a master template for sub-page templates, merge in some dynamic functionality etc.
And the thing would either work or it wouldn't (if the HTML couldn't be tidied, which was easy enough to catch).
The Perl around the outside changed very rarely; the XSLT stylesheet was fast and evolved quite a lot.
XSLT's matching rules allow a 'push' style of transform that's really neat. But you can actually do that with any programming language such as Javascript.
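A minimal sketch of the "push" style, with made-up element names (catalog/item/title): instead of pulling values out with loops, you declare a rule per node type and let the processor walk the tree, firing whichever template matches.

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- the root rule pushes the children through apply-templates -->
  <xsl:template match="/catalog">
    <ul><xsl:apply-templates select="item"/></ul>
  </xsl:template>
  <!-- each item is handled by whichever template matches it -->
  <xsl:template match="item">
    <li><xsl:value-of select="title"/></li>
  </xsl:template>
</xsl:stylesheet>
```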
Actually a transformation system can reduce bloat, as people don't have to write their own crappy JavaScript versions of it.
Being XML, the syntax is a bit convoluted, but behind it is a good functional system (in the sense of a functional programming language, not merely "functioning") which can be used for templating etc.
The XML made it a bit hard to get started, and anti-XML sentiment reduced the motivation to get into it, but once you know it, it beats most bloaty JavaScript stuff in that realm by a lot.
I'm always puzzled by statements like this. I'm not much of a programmer and I wrote a basic XSLT document to transform rss.xml into HTML in a couple of hours. I didn't find it very hard at all (anecdotes are not data, etc)
Although it's sad to see an interesting feature go, they're not wrong about security. A small attack surface matters all the more if this was maintained by one guy in Nebraska and he doesn't maintain it any more.
No, XSLT isn't required for the open web. Everything you can do with XSLT, you can also do without XSLT. It's interesting technology, but not essential.
Yes, this breaks compatibility with all the 5 websites that use it.
One extremely important XSLT use-case is for RSS/Atom feeds. Right now, clicking on a link to feed brings up a wall of XML (or worse, a download link). If the feed has an XSLT stylesheet, it can be presented in a way that a newcomer can understand and use.
I realize that not that many feeds are actually doing this, but that's because feed authors are tech-savvy and know what to do with an RSS/Atom link.
But someone who hasn't seen/used an RSS reader will see a wall of plain-text gibberish (or a prompt to download the wall of gibberish).
XSLT is currently the only way to make feeds into something that can still be viewed.
I think RSS/Atom are key technologies for the open web, and discovery is extremely important. Cancelling XSLT is going in the wrong direction (IMHO).
I've done a bunch of things to try to get people to use XSLT in their feeds: https://www.rss.style/
You can see it in action on an RSS feed here (served as real XML, not HTML: do view/source): https://www.fileformat.info/news/rss.xml
> One extremely important...
Not to downplay what you think is important, but I think it's pretty important that governments and public bodies use XSLT.
https://www.congress.gov/117/bills/hr3617/BILLS-117hr3617ih....
https://www.govinfo.gov/content/pkg/BILLS-119hr400ih/xml/BIL...
https://www.weather.gov/xml/current_obs/KABE.xml
https://www.europarl.europa.eu/politicalparties/index_en.xml
https://apps.tga.gov.au/downloads/sequence-description.xml
https://cwfis.cfs.nrcan.gc.ca/downloads/fwi_obs/WeatherStati...
https://converters.eionet.europa.eu/xmlfile/EPRTR_MethodType...
They don't put ads on their sites, so I'm not surprised Google doesn't give a fuck about them...
> “They don't put ads on their sites, so I'm not surprised…”
Similarly, Chrome regularly breaks or outright drops support for web features used only in private enterprise networks. Think NTLM or Kerberos authentication, private CA revocation list checking, that kind of thing.
Again, nobody uses Google Ads on internal apps!
Many governments and public bodies used Flash, ActiveX and Java applets, but I'm certainly glad we got rid of those.
Replaced them with app stores; why have one code base when you can have N code bases: web sites, iOS, Android, TV…
Cheaper, privacy-oriented, and more secure? Lol, obviously not; it doesn't help the consumer or the developer.
XSLT is brilliant at transforming raw data, a tree or table for example, without having to install Office apps or pay a number of providers simply to view it, without massive disruption loops.
We do the same with our feeds at Standard Ebooks: https://standardebooks.org/feeds/rss/new-releases
The page is XML but styled with XSLT.
FWIW the original post explicitly mentioned this use case and offered two ways to work around it.
Gotta love the reference to the <link> header element. There used to be an icon in the browser URL bar when a site had a feed, but they nuked that too.
> There used to be an icon in the browser URL bar when a site had a feed, but they nuked that too.
This is actually a feature of Orion[0], and among the reasons why I believe it to be one of the most (power) user-oriented browsers in active development.
It's such a basic thing that there's really no good reason to remove the feature outright (as mainstream browsers have), especially when the cited reason is to "reduce clutter" which has been added back tenfold with garbage like chatbots and shopping assistants.
[0]: https://kagi.com/orion/
Man, reaching way back in history here, but this reminds me of why I stopped contributing to Mozilla decades ago. My contribution was the link toolbar, that was supposed to give a UI representation of the canonical link elements like next and prev and whatnot. At the last minute before a major release some jerkhole of a product manager at AOL cut my feature from the release. It's incredible the way such petty bureaucrats have shaped web browsers over the years.
Good user-facing software tends to have a coherent vision, and that involves getting features cut that people put a lot of time and effort into; even though those features have value, it's possible they don't have value in the product under development.
I don't really have enough context to say whether that was the case here. Mostly I'm raising the comment to note that this is an issue in commercial software too, but the sting is immediately moderated by "At least you got paid." It's a lot easier to see one's work fail to be reflected in the finished product when you can dry your tears with the bills in your money-pile (and I don't know how open source competes in things as cut-throat and taste-opinionated as UI when that continues to be true without solving the problem by injecting money into the process, which carries its own risks).
It's a browser. Its job is to render HTML.
If a fix renders HTML better and more completely, it should be done. It is always possible to make it configurable or change it later.
Instead, Mozilla is pushing anti-web LLM integrations.
IIRC, all of the proposed workarounds involved updating the sites using XSLT, which may not always be particularly easy, or even something publishers will realize they need to do.
Here's a 3rd option :)
For RSS/Atom feeds presented as links in a site (for convenience to users), developers can always offer a simple preview for the feed output using: https://feedreader.xyz/
Just URL-encode the feed like so: https://feedreader.xyz/?url=https%3A%2F%2Fwww.theverge.com%2...
...and you get a nice preview that's human readable.
Another use case I discovered and implemented many years ago was styling a sitemap.xml for improved UX / aesthetics.
> not that many feeds are actually doing this
Isn't this kind of an argument for dropping it? Yeah it would be great if it was in use but even the people who are clicking and providing RSS feeds don't seem to care that much.
You are probably right, but it is depressing how techies don't see the big picture & don't want to provide an on-ramp to the RSS/Atom world for newcomers.
Google is widely faulted with effectively killing RSS by pulling the plug on Reader (I, for example, haven’t used RSS since), so I don’t think they’re missing the big picture, I think they just prefer a different picture
It's probably worth considering that if the technology could be killed by one company pulling its chips off the board, perhaps the technology wasn't standing on its own.
We still use RSS and Atom feeds for podcasts. It's a pretty widely-adopted use case. Perhaps there is a lot more to the contraction of RSS as a way for discovering publishing of "blog"-style media than "Reader got killed" (it seems like Reader offered more features than just RSS consolidation that someone could, hypothetically, build... But nobody has yet?).
I never got the backlash with Reader, having always used native apps to handle RSS.
Native apps are always better, but having a web page syncing your feeds made it easier to access them, eg from the library or work computer. Not to mention nothing to install (or update) reduces friction. I didn’t have to stop using RSS, but the newly exposed hurdles were enough discouragement that I did stop
How does displaying XML using a client-side transform provide a better on-ramp compared to displaying XML using a server-side transform?
1. Everyone who uses a static site generator can add XSLT
2. Everyone who doesn't use a static site generator only has to add the XSLT file and add a single line to the XML. No need to write any code: new code is not a big deal for many HN readers, but not every blog author is a coder.
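For point 2, that single line is an xml-stylesheet processing instruction at the top of the feed. A sketch (the stylesheet path and feed contents are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<rss version="2.0">
  <channel>
    <title>My blog</title>
  </channel>
</rss>
```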
I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
"XSLT is currently the only way to make feeds into something that can still be viewed."
You could use content negotiation just fine. I just hit my personal rss.xml file, and the browser sent this as the Accept header:
except it has no newline, which I added for HN. You can easily ship out an HTML rendering of an RSS file based on this. You can have your server render an XSLT if you must. You can have your server send out some XSLT implemented in JS that will come along at some point.
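A minimal sketch of that idea. The function name and the first-match-wins strategy are mine; real content negotiation would honor the q-values defined in RFC 9110 rather than just scanning for HTML types.

```javascript
// Decide which representation of a feed URL to serve, based on Accept.
// Browsers put text/html near the front; feed readers ask for XML types.
function pickFeedFormat(acceptHeader) {
  const types = (acceptHeader || "")
    .split(",")
    .map((part) => part.split(";")[0].trim());
  if (types.includes("text/html") || types.includes("application/xhtml+xml")) {
    return "html"; // serve a rendered page
  }
  return "xml"; // serve the raw RSS/Atom document
}
```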
To a first approximation, nobody cares enough to use content negotiation any more than anyone cares about providing XML stylesheets. The tech isn't the problem, the not caring is... and the not caring isn't actually that big a problem either. It's been that way for a long time and we aren't actually all that bothered about it. It's just a "wouldn't it be nice" that comes up on those rare occasions like this when it's the topic of conversation and doesn't cross anyone's mind otherwise.
You've been in the RSS world since the beginning and never seen a stylized feed?
I've not been in the RSS world very much. I don't use news readers. And even I have seen a stylized RSS in the wild.
Our individual experiences are of course anecdotal, I'm just surprised at how different they are given your background.
> nor have I seen one.
Once upon a time, nice in-browser rendering of RSS/Atom feeds complete with search and sorting was a headliner feature of Safari.
https://www.askdavetaylor.com/how_do_i_subscribe_to_rss_feed...
Another point: it is shocking how many feeds have errors in them. I analyzed the feeds of some of the top contributors on HN, and almost all had something wrong with them.
Even RSS wizards would benefit from looking at a human-readable version instead of raw XML.
I ended up writing a feed analyzer that you can try on your feed: https://www.rss.style/feed-analyzer.html
> I analyzed the feeds of some of the top contributors on HN, and almost all had something wrong with them.
I’m sceptical about your analysis, because your tool makes spurious complaints about my feed <https://chrismorgan.info/feed.xml> which show that it’s not parsing XML correctly. For stupid reasons¹ that I decided not to fix or work around, many of the slashes are encoded as `&#x2F;`, which is perfectly valid, but your tool fails to decode the character references inside attribute values. I don’t know what dodgy parser you’re using, it’s possible this is the only thing it gets wrong about parsing XML², but it doesn’t instil confidence. I would expect a strict XML parser to be more reliable. I’ve literally only once encountered a feed that was invalid XML³. Liberal parsing is not a virtue, it’s fragile in a different way. Postel was wrong.
—⁂—
¹ I wish OWASP’s XSS protection cheat sheet had never been written. I will say no more.
² Honestly, parsing XML isn’t very hard; once you’re past the prologue, there are literally only about seven simple concepts to deal with (element, attribute, text, processing instructions, comments, cdata, character/entity references), with straightforward interactions. Not decoding references in attribute values is a mind-boggling oversight to me.
³ WordPress thinks it’s okay to encode U+0003 as &#3; in an XML 1.0 document.
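The point about decoding character references inside attribute values is easy to verify with any conformant parser. A minimal sketch using Python's standard library (the URL is made up for illustration):

```python
import xml.etree.ElementTree as ET

# A feed fragment whose attribute value encodes slashes as character
# references (&#x2F;), which is perfectly valid XML.
doc = '<link href="https:&#x2F;&#x2F;example.com&#x2F;feed.xml" rel="self"/>'

root = ET.fromstring(doc)

# A conformant parser decodes the references, so the attribute value
# comes back with plain slashes.
print(root.get("href"))  # https://example.com/feed.xml
```

Any tool that reports the raw `&#x2F;` text here is not doing XML parsing, it's doing string matching.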
I think it used to be more popular in the early days. At one point I think Firefox was styling RSS feeds by default, so people stopped using XSLT as much.
You can still style them with CSS if you want. I don't really see the point. RSS is for machines to read, not humans.
There's a fairly good chance that you simply haven't noticed, because it was working as intended, e.g. https://standardebooks.org/feeds/rss/new-releases
from: https://news.ycombinator.com/item?id=45824952
That's my point: you know all about RSS & feeds and don't need it. But what about someone who hasn't been using them since the beginning?
I think every page with an RSS feed should have a link to the feed in the html body. And it should be friendly to people who are not RSS wizards.
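For illustration, that might look like the following (the path and title are placeholders): an auto-discovery `link` in the head for feed readers, plus a visible link in the body for humans.

```html
<head>
  <!-- Auto-discovery: feed readers (and some browsers) look for this -->
  <link rel="alternate" type="application/rss+xml"
        title="Example Blog" href="/feed.xml">
</head>
<body>
  <!-- A visible, human-friendly link for everyone else -->
  <a href="/feed.xml">Subscribe via RSS</a>
</body>
```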
The phrase you are looking for to describe this discourse is “concern trolling.”
> I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
Maybe it's more for people who have no idea what RSS is and click on the intriguing icon. If they weren't greeted with a load of what seems like nonsense for nerds there could have been broader adoption of RSS.
> If they weren't greeted with a load of what seems like nonsense for nerds there could have been broader adoption of RSS.
Why? Wouldn't they just see a different view of the same website that had that intriguing icon and go "ok, so what?"
If they don't know what an RSS feed is, seeing a stylized version isn't really going to help them understand, imho.
You can add text to the document via XSL which can be used to explain what you're looking at and how to use it.
See: https://developer.mozilla.org/en-US/docs/Web/XML/XSLT/Refere...
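As a minimal sketch of that idea (file name, titles, and wording are hypothetical), a feed can reference a stylesheet with `<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>` and the stylesheet can render the items alongside a plain-language explanation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/rss/channel">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <!-- Explanatory text for visitors who have never seen a feed -->
        <p>This is an RSS feed. Copy the URL from the address bar into
           your feed reader to subscribe to new posts.</p>
        <xsl:for-each select="item">
          <p><a href="{link}"><xsl:value-of select="title"/></a></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Feed readers ignore the processing instruction and see the raw XML; only humans in a browser see the rendered page.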
> I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
So that excludes you from the "someone who hasn't seen/used an RSS reader" demographic mentioned in the comment you are replying to.
I never realized styling RSS feeds was an option. Now looking at some of the examples, I wonder how many times I've clicked on "Feed", then rolled my eyes and closed it because I thought it wasn't RSS. More than zero, I'm sure.
It's the right direction if you're google. This is why Google should not be allowed to control the web. Support firefox, dump google.
> Cancelling XSLT is going in the wrong direction (IMHO).
XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away. The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
It seems like something an extension ought to be capable of, and if not, fix the extension API so it can. In firefox I think it would be a full-blown plugin, which is a lower-level thing than an extension, but I don't know whether Chromium even has a concept of such a thing.
> XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away.
Not having it available from the browser really reduces the ability to use it in many cases, and lots of the nonbrowser XSLT ecosystem relies on the same insecure, unmaintained implementation. There is at least one major alternative (Saxon), and if browser support was switching backing implementation rather than just ending support, “XSLT isn’t going anywhere” would be a more natural conclusion, but that’s not, for whatever reason, the case.
Your argument here includes that browsers should retain native XSLT implementations because non-browsers have bad XSLT implementations?
I don’t see anything that looks remotely like a normative argument about what browsers should or should not do anywhere in my post that you are responding to, did you perhaps mean to respond to some other post?
My point was that the decision to remove XSLT support from browsers rather than replacing the insecure, unmaintained implementation with a secure, maintained implementation is an indicator opposed to the claim "XSLT isn’t going anywhere”. I am not arguing anything at all about what browser vendors should do.
Is the idea that if they did so, the insecure non-browser XSLT-users could adopt their implementation?
The idea is that if they did so, the people using software running in the browser could continue to use XSLT with just the browser platform because the functionality would still be there with a different backend implementation, but instead that in-browser XSLT functionality is going somewhere, specifically, away.
Right but either way, the vulnerability exists today, and you're saying that whether or not the browser platform supports the functionality that harbors the vulnerabilities, the browser platform should be responsible for resolving those vulnerabilities. That's how I read it.
> and you're saying that whether or not the browser platform supports the functionality that harbors the vulnerabilities, the browser platform should be responsible for resolving those vulnerabilities.
No, I'm not (and I keep saying this explicitly) saying that browsers should or should not do anything, or be responsible for anything. I’m not making a normative argument, at all.
I am stating, descriptively, that browser vendors choosing to remove XSLT functionality rather than repairing it by using an alternative implementation is very directly contrary to the claim made upthread that “XSLT isn’t going anywhere”. It is being removed from the most popular application platform in existence, with developers being required to bring their own implementation for what was previously functionality supported by the platform. I am not saying that this is good or bad or that anyone should or should not do anything differently or making any argument about where responsibility for anything related to this lies.
What do you find questionable about this being included as part of the broader argument?
I just don't understand it. I don't understand it well enough to call out what's questionable about it.
Or perhaps the multi-billion-dollar corporations could stop piggy-backing on volunteers and invest in maintaining the Web platform?
They did, the issue is that the improved Web platform they invested so much to build and maintain has no use for XSLT, which is obsolete in the modern world of good JavaScript, JSON and modern Fetch APIs.
Which popular browsers are significantly leaning on individual contributors or volunteers?
Google decided to drop XSLT, because the volunteer-maintained libxslt had no maintainers for some time. So, instead of helping the project, they just decided to remove a feature.
But by (attempting to) remove this dependency, Google indeed decided to stop piggy-backing on another volunteer - as requested.
Be careful what you wish for. :-)
Were you born before or after heartbleed uncovered the sorry state of OpenSSL and the complete absence of funding it was maintained under?
So to answer your question: Every single one of them, from Google with its billions, to Mozilla with Googles billions, none of them would spend even a cent on critical open source projects they relied on as long as they could get away with it.
Almost all of them? As I recall, there was a single volunteer developer maintaining the XML/XSLT libraries they were using.
Wasn't it similar with openssl 13+ years ago? Few volunteer maintainers, and only after a couple of major vulnerabilities money got thrown at that project?
I'm sure there's more and that's why the famous xkcd comic is always of relevance.
So... you want newbies to install an extension/plugin before they get a human-readable view of a feed???
That's about as new-user-hostile as I can imagine.
There are plenty of ways around this.
As others have pointed out, there are other options for styling XML that work well enough in practice. You can also do content negotiation on the server, so that a browser requesting an html document will get the human-readable version, while any feed reader will be sent the XML version. (If you render the html page with XSLT, you can even take advantage of better XSLT implementations where you don't need to work around bugs and cross-platform jank.) Or you can rely on `link` tags, letting users submit your homepage to their feed reader, and having the feed reader figure out where everything is.
There might even be a MIME type for RSS feeds, such that if you open an RSS feed in your browser, it automatically figures out the correct application (i.e. your preferred RSS reader) to open that feed in. But I've not seen that actually implemented anywhere, which is a shame, because that seems like by far the best option for user experience.
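The server-side content negotiation described above can be sketched in a few lines. This is a toy illustration, not any framework's API, and real negotiation should also honour `q=` quality values, which this deliberately ignores:

```python
def negotiate(accept_header: str) -> str:
    """Return 'html' for browsers, 'xml' for feed readers (toy sketch)."""
    # Strip parameters like ';q=0.8' and normalize each media type.
    accepted = [part.split(";")[0].strip().lower()
                for part in accept_header.split(",")]
    # Browsers lead with text/html; feed readers ask for the XML types
    # (or send */* and are better served the raw feed anyway).
    if "text/html" in accepted:
        return "html"
    return "xml"

print(negotiate("text/html,application/xhtml+xml,*/*;q=0.8"))  # html
print(negotiate("application/rss+xml, application/atom+xml"))  # xml
```

The same URL can then serve a human-readable page to browsers and the raw feed to readers, with no client-side transformation at all.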
> XSLT isn't going anywhere
XSLT as a feature is being removed from web browsers, which is pretty significant. Sure it can still be used in standalone tools and libraries, but having it in web browsers enabled a lot of functionality people have been relying on since the dawn of the web.
> hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away
So why not switch to a better maintained and more secure implementation? Firefox uses TransforMiix, which I haven't seen mentioned in any of Google's posts on the topic. I can't comment on whether it's an improvement, but it's certainly an option.
> The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
Really? How about a trillion dollar corporation steps up to sponsor the lone maintainer who has been doing a thankless job for decades? Or directly takes over maintenance?
They certainly have enough resources to maintain a core web library and fix all the security issues if they wanted to. The fact they're deciding to remove the feature instead is a sign that they simply don't.
And I don't buy the excuse that XSLT is a niche feature. Their HTML bastardization AMP probably has even less users, and they're happily maintaining that abomination.
> It seems like something an extension ought to be capable of
I seriously doubt an extension implemented with the restricted MV3 API could do everything XSLT was used for.
> and if not, fix the extension API so it can.
Who? Try proposing a new extension API to a platform controlled by mega-corporations, and see how that goes.
It's encouraging to see browsers actually deprecate APIs; I think a lot of problems with the Web, and Web security in particular, stem from people adopting new technologies too fast while not retiring old ones fast enough.
That said, it's also pretty sad. I remember back in the 2000s writing purely XML websites with stylesheets for display, and XML+XSLT is more powerful, more rigorous, and arguably more performant now in the average case than JSON + React + vast amounts of random collated libraries which has become the Web "standard".
But I guess LLMs aren't great at generating XSLT, so it's unlikely to gain back that market in the near future. It was a good standard (though not without flaws), I hope the people who designed it are still proud of the influence it did have.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Yup, "been there, done that" - at the time I think we were creating reports in SQL Server 2000, hooked up behind IIS.
It feels this is being deprecated and removed because it's gone out of fashion, rather than because it's actually measurably worse than whatever's-in-fashion-today... (eg React/Node/<whatever>)
Yeah it was great. You could sort/filter/summarize in different ways all without a server round-trip. At the time it seemed magical to users.
100%. I’ve been neck deep over the past few months in developing a bunch of Windows applications, and it’s convinced me that never deprecating or removing anything in the name of backwards compatibility is the wrong way. There’s a balance to be struck like anything, but leaving these things around means we continue to pay for them in perpetuity as new vulnerabilities are found or maintenance is required.
What about XML + CSS? CSS works the exact same on XML as it does on HTML. Actually, CSS works better on XML than HTML because namespace prefixes provide more specific selectors.
The reason CSS works on XML the same as HTML is because CSS is not styling tags. It is providing visual data properties to nodes in the DOM.
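To make that concrete, here is a minimal sketch (the file name `feed.css` and the selectors are illustrative): an Atom feed can reference plain CSS via `<?xml-stylesheet type="text/css" href="feed.css"?>`, and the stylesheet can select on namespaced element names.

```css
/* feed.css — styles an Atom document directly, no HTML involved */
@namespace atom url(http://www.w3.org/2005/Atom);

/* XML elements default to display:inline, so lay them out explicitly */
atom|feed  { display: block; font-family: sans-serif; }
atom|title { display: block; font-size: 1.5em; }
atom|entry { display: block; margin: 1em 0; }
atom|entry atom|title { font-size: 1em; font-weight: bold; }

/* Hide machine-only metadata from human readers */
atom|id, atom|updated { display: none; }
```

Unlike XSLT, this cannot restructure the document or generate links, but for simple read-only presentation it is enough.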
Agreed on API deprecation; the surface is so broad at this point that it's nearly impossible to build a browser from scratch. I've been doing webdev since 2009 and I'm still finding new APIs that I've never heard of before.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Awesome! I made a blog using XML+XSLT back in high school. It was worth it just to see the flabbergasted look on my friends' faces when I told them to view the source code of the page, and it was just XML with no visible HTML or CSS[0].
[0] https://www.w3schools.com/xml/simplexsl.xml - example XML+XSLT page from w3schools
Some people seem to think XSLT is used for the step from DOM -> graphics. This is not the first time I have seen a comment implying that, but it is wrong. XSLT is for the step from 'normalized data' -> DOM. And I like that this can be done in a declarative way.
This has been chewed on ad nauseum on HN already, to the point I won't even try to make a list of the articles but just link a search result: https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=fal...
KPIs of the hottest languages on the web show that XSLT has become hotter than Java and C++ in the last days. XML is back, dudes!
Best thread IMHO, lots of thoughtful root-level comments:
"Remove mentions of XSLT from the html spec" https://news.ycombinator.com/item?id=44952185
The "severe security issue" in libxml2 they mention is actually a non-issue and the code in question isn't even used by Chrome. I'm all for switching to memory-safe languages but badmouthing OSS projects is poor style.
It is also kinda a self-burn. Chromium is an aging code base [1]. It is written in a memory-unsafe language (C++), calls hundreds of outdated & vulnerable libraries [2], and has hundreds of high-severity vulnerabilities [3].
People in glass houses shouldn't throw stones.
[1] https://github.com/chromium/chromium/commits/main/?after=c5a...
[2] https://github.com/chromium/chromium/blob/main/DEPS
[3] https://www.cvedetails.com/product/15031/Google-Chrome.html?...
Given Google's resources, I'm a little surprised they haven't created an LLM that would rewrite Chromium into Go/Rust and replace all the stale libraries.
Google has been too cheap to fund or maintain the library they built their browser with for more than a decade, and now that its hobbyist maintainers got burnt out, they're ripping out the feature.
Their whole browser is made up of unsafe languages, and their attempt to sort of make C++ safer has yet to produce a usable proof-of-concept compiler. This is a fat middle finger in the face of all the people whose free work they grabbed to collect billions for their investors.
Nobody is badmouthing open source. It's a core truth: open source libraries can become unmaintained for a variety of reasons, including the code base becoming a burden for anyone new to maintain.
And you know what? That's completely fine. Open source doesn't mean something lives forever.
The issue in question is just one of the several long-unfixed vulnerabilities we know about, from a library that doesn't have that many hands or eyes on it to begin with.
And why doesn’t Google contribute to fixing and maintaining code they use?
Because they don't want to use the code. They begrudgingly use it to support XSLT and now they don't use it.
Maintaining web standards without breaking backwards compatibility is literally what they signed up for when they decided to make a browser. If they didn't want to do that job, they shouldn't have made one.
They "own the web". They steer its standards, and other browsers' development paths (if they want to remain relevant).
It is remarkable the anti-trust case went as it did.
According to whom?
Chromium is open source and free (both as in beer and speech). The license says they've made no future commitments and made no warrants.
Google signed up to give something away for free to people who want to use it. From the very first version, it wasn't perfectly compatible with other web browsers (which mostly did IE quirks things). If you don't want to use it, because it doesn't maintain enough backwards compatibility... Then don't.
The license would be relevant if I'd claimed that removing XSLT was illegal or opened them up to lawsuits, but I didn't. The obligation they took on is social/ethical, not legal. By your logic, chrome could choose to stop supporting literally anything (including HTML) in their "browser" and not have done anything that we can object to.
IIRC, lack of IE compatibility is fundamentally different, because the IE-specific stuff they didn't implement was never part of the open web standards, but rather stuff Microsoft unilaterally chose to add.
> By your logic, chrome could choose to stop supporting literally anything (including HTML) in their "browser" and not have done anything that we can object to.
Literally this. Microsoft used to ship a free web browser. Then they stopped. That's not something anybody can object to.
> because the IE specific stuff they didn't implement was never part of the open web standards, but rather stuff Microsoft unilaterally chose to add.
Standards aren't holy books. It's actually more important to support real customer use cases than to follow standards.
But you know this. If standards are more important than real use cases, then the fact that XSLT has been removed from the HTML5 standard is enough justification to remove it from Chrome.
> Literally this. Microsoft used to ship a free web browser. Then they stopped. That's not something anybody can object to.
There is a fundamental difference between ceasing to make a browser and continuing to make a browser, while not meeting your expectations as a browser maker.
> If standards are more important that real use cases, then the fact that XSLT has been removed from the html5 standard is enough justification to remove it from Chrome.
Browsers very much have not deprecated support for non-HTML5 markup (e.g. the HTML4-era <center> tag still works). This is because upholding developers' and users' expectation that standards-compliant websites that once worked will continue to work is important.
We object with our feet, by switching browsers.
What odds would you put dropping XSLT support at for triggering a user migration?
The license is the way it is not by choice. We should be clear about that and acknowledge KHTML, and both Safari and Chromium origins. Some parts remain LGPL to this day.
Because in this case it doesn't contribute to their ability to deliver ads.
If that was the case, they would switch to Xee (a Rust XPath/XSLT implementation).
Sounded like the maintainers of libxml2 have stepped back, so there needs to be a supported replacement, because it is widely used. (Or if you are worried about the reputation of "OSS", you can volunteer!)
Where's the best collection or entry point to what you've written about Chrome's use of Gnome's XML libraries, the maintenance burden, and the dearth of offers by browser makers to foot the bill?
To anyone who says to use JS instead of XSLT: I block JS because it is also used for ads, tracking and bloat in general. I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all).
I think being able to do client-side templating without JS is an important feature and I hope that since browser vendors are removing XSLT they will add some kind of client-side templating to replace it.
> I block JS
The percentage of visitors who block JS is extremely small. Many of those visits are actually bots and scrapers that don’t interpret JS. Of the real users who block JS, most of them will enable JS for any website they actually want to visit if it’s necessary.
What I’m trying to say is that making any product decision for the extremely small (but vocal) minority of users who block JS is not a good product choice. I’m sorry it doesn’t work for your use case, but having the entire browser ecosystem cater to JS-blocking legitimate users wouldn’t make any sense.
I block JS, too. And so does about 1-2% of all Web users. JavaScript should NOT be REQUIRED to view a website. It makes web browsing more insecure and less private, makes page load times slower, and wastes energy.
> And so does about 1-2% of all Web users.
To put that in context, about 6 percent of US homes have no internet access at all. The “I turn off JS” crowd is at least 3x smaller than the crowd with no access at all.
The JS ship sailed years ago. You can turn it off but a bunch of things simply will not work and no amount of insisting that it would not be required will change that.
I'm hearing you say, "don't waste your breath because change is not possible." And there you have your self-fulfilling prophecy.
To quote someone who lived before me: don't accept the things you cannot change. Change the things you cannot accept.
And the no-JS ship has not sailed. Government websites require accessibility, and at least in the UK, do not rely on JS.
Then you misheard me.
I’m not saying change is not possible. I’m saying the change you propose is misguided. I do not believe the entire world should abandon JS to accommodate your unusual preferences nor should everyone be obliged to build two versions of their site, one for the masses and one for those with JS turned off.
Yes, JS is overused. But JS also brings significant real value to the web. JS is what has allowed websites to replace desktop apps in many cases.
> Yes, JS is overused. But JS also brings significant real value to the web. JS is what has allowed websites to replace desktop apps in many cases.
Exactly. JS should be used to make apps. A blog is not an app. Your average blog should have 0 lines of JS. Every time I see a blog or a news article whose content doesn't load because I have JS disabled, I strongly reconsider whether it's worth my time to read or not.
Did I say abandon? No. I said it should not be required. JavaScript should be supplementary to a page, but not necessary to view it. This was its original intent.
> JS is what has allowed websites to replace desktop apps in many cases.
Horribly at that, with poorer accessibility features, worse latency, abused visual style that doesn't match the host operating system, unusable during times of net outages, etc, etc.
> JavaScript should be supplementary to a page, but not necessary to view it.
I’m curious. Do Google Maps, YouTube, etc even work with JS off?
> This was its original intent.
Original intent is borderline irrelevant. What matters is how it is actually used and what value it brings.
> Horribly at that
I disagree. You say you turn JS off for security but JS has made billions of people more secure by creating a sandbox for these random apps to run in. I can load up a random web app and have high confidence that it can’t muck with my computer. I can’t do the same with random desktop apps.
> You say you turn JS off for security but JS has made billions of people more secure by creating a sandbox for these random apps to run in.
Is "every website now expects to run arbitrary code on the client's computer" really a more secure state of affairs, after high-profile hardware vulnerabilities exploitable even from within sandboxed JS?
From how many unique distributors did the average person run random untrusted apps that required sandboxing, before and after this became the normal way to deliver a purely informational website and basically everything started happening online?
People used to download way more questionable stuff and run it. Remember shareware? Remember Sourceforge? (Remember also how Sourceforge decided to basically inject malware that time?)
I used to help friends and family disinfect their PCs from all the malware they’d unintentionally installed.
> I’m curious. Do Google Maps, YouTube, etc even work with JS off?
I use KDE Marble (OpenStreetMap) and Invidious. They work fine.
> Original intent is borderline irrelevant. What matters is how it is actually used and what value it brings.
And that's why webshit is webshit.
> I can’t do the same with random desktop apps.
I can, and besides the point, why should anyone run random desktop apps? (Rhetorical question, they shouldn't.) I don't run code that I don't trust. And I don't trust code that I can't run for any purpose, read, study, edit, or share. I enforce this by running a totally-free (libre) operating system, booted with a totally-free BIOS, and installing and using totally-free software.
> I use KDE Marble (OpenStreetMap) and Invidious. They work fine.
So no. Some major websites don’t actually work for you.
> And that's why webshit is webshit.
I don’t understand this statement. Webshit is webshit because the platform grew beyond basic html docs? At some point this just feels like hating on change. The web grew beyond static html just like Unix grew beyond terminals.
> I don't run code that I don't trust. And I don't trust code that I can't run for any purpose, read, study, edit, or share. I enforce this by running a totally-free (libre) operating system, booted with a totally-free BIOS, and installing and using totally-free software.
If this is the archetype of the person who turns off JS then I would bet the real percentage is way less than 1%.
I don't see how this makes the "JS availability should be the baseline" assumption any more legitimate. We make it possible to function in a society for those 6% of people. Low percentage still works out to a whole lot of people who shouldn't be left out.
I disagree. The world is under no obligation to cater to a tiny minority who self-select into reduced-functionality experiences.
It’s fine for you to turn off JS. It’s also fine for developers to require JS. Software has had minimum system requirements forever. I can’t run Android apps on my Palm Pilot from 2002 either and no one is obligated to make them work for me.
Without saying whether I think that's a good or bad thing, as a practical matter, I 100% agree. Approximately no major websites spend any effort whatsoever supporting non-JS browsers today. They probably put that in the class of text only browsers, or people who override all CSS: "sure, visitors can do that, but if they've altered their browser's behavior then what happens afterward is on them."
And frankly, from an economic POV, I can't blame them. Imagine a company who write a React-based website. (And again, I'm not weighing in on the goodness or badness of that.) Depending on how they implemented it, supporting a non-JS version may literally require a second, parallel version of the site. And for what, to cater to 1-2% of users? "Hey boss, can we triple our budget to serve two versions of the site, kept in lockstep and feature identical so that visitors don't scream at us, to pick up an extra 1% or 2% of users, who by definition are very finicky?" Yeah, that's not happening.
I've launched dozens of websites over the years, all of them using SSR (or HTML templates as we called them back in the day). I've personally never written a JavaScript-native website. I'm not saying the above because I built a career on writing JS or something. And despite that, I completely understand why devs might refuse to support non-JS browsers. It's a lot of extra work, it means they can't use the "modern" (React launched in 2013) tools they're used to, and all without any compelling financial benefit.
In addition to those things, JavaScript can also cause some things to not work properly even though they would work without it.
The point of the poster you're responding to is that sites are built JS-first for 98-99% of users, and it takes extra work to make them compatible with "JavaScript should NOT be REQUIRED to view a website", and no one is going to bother doing that work for 1-2% of users.
Yeah... or...... maybe they should just build websites the proper way the first time around, returning plain HTML, perhaps with some JS extras. Any user-entered input needs to be validated again on the backend anyway, so client-side JS is often a waste.
This falls apart the moment you need to add rows to a table or show and hide things in response to values selected in a dropdown. Even the lightest JS app centered around a big form is going to become a huge pain in the ass for literally no benefit. In a company of 100 people, that <0.5% of people who disable JS could literally be one guy, or no one at all.
You can use CSS for interactive-esque things like that. Use JS for all I care, just don't make it mandatory. You /could/ refresh the page with new values. You /could/ paginate your flow. You won't, because you'd rather spend 50 hours getting your JS to work right, than 5 hours writing some PHP.
Pity.
I _could_ also just write API endpoints and handle client-side interaction however I want. If your preferences are incompatible with mine, that's a tradeoff I'm choosing to make. I am doing the work, you see, and I can choose how I want to do it.
You ostensibly run some flavor of Linux. Do you also complain that macOS apps don't run on your machine? It seems to me like a similar argument: somebody has developed an application in some particular way, but your choices have resulted in that application not running on your machine. Your choices are not necessarily _wrong_, but they are of very little consequence to somebody who has developed an application with a particular environment/runtime in mind. Why should they have to make significant architectural changes to their application to support your non-standard choices?
I make Javascript mandatory to use my sites regardless of if it's necessary.
Blocking first party Javascript is a form of lunacy that is so illogical I can only shake my head. Let's say the site runs XSLT in Javascript. Now what? There's nothing that can be done and yet you would ask for further accommodation.
Here is why this is abusive: You can always restrict the subset of the web platform you demand to a subset that is arbitrarily difficult or even impossible to support. No matter how much accommodation is granted, it will be all for naught, because some guy out there goes even further with blocking things and starts blocking CSS. Next thing you know there's a guy who blocks HTML and you're expected to render out your website as a SVG with clickable links.
Of note here is that the segment we're talking about is actually an intersection of two very small cohorts; the first, as you note, are people who don't own a television errr disable Javascript, and the second is sites that actually rely on XSLT, of which there are vanishingly few.
XSLT is being exploited right now for security vulnerabilities, and there is no solution on the horizon.
The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
As they say, security is not a product, it’s a process. The process we have for existing browser technologies is better. That process is better because more people use it.
But even if we were to try to consider the technologies in isolation, and imagine a timeline where things were different? I doubt whether XML+XSLT is the superior platform for security. If it had won, we’d just have a different nightmare of intermingled content and processing. Maybe more stuff being done client-side. I expect that browser and OS manufacturers would be warping content to insert their own ads.
>You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
Are there examples of this?
> The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
Yes, they also have many more vulnerabilities, because browsers are JIT-compiling JS to w+x memory pages. And JS continues to get more complex with time. This is just fundamentally not the case with XSLT.
We're comparing a few XSLT vulnerabilities to hundreds of JIT compiler exploits.
While JIT exploits represent a large share of vulnerabilities in JS engines, there are enough other classes of vulnerabilities that simply turning the JIT off is not sufficient. (The same goes for simply turning JS off; browser internals are complex enough even without JS.)
Turning off the JIT eliminates an entire class of vulnerabilities just by nature of how the JIT works.
Ironically, JIT JS is much more susceptible to buffer overflow exploits than even the C code that backs XSLT - because the C code doesn't use w+x memory pages!
Yeah, turning off the JS or Web eliminates an entire class of vulnerabilities just by nature of how the JS or Web works (running untrusted code or showing untrusted content in the local machine) as well. That's no surprise.
I feel like you're being purposefully obtuse.
The problem with JS isn't running untrusted code. That's easy and solved, we've been doing that for decades.
The problem with the JIT is compiling instructions, writing them to memory pages, and then executing them. This means your memory MUST be w+x.
This is really, really bad. If you have any way to write to memory unsafely, you can write arbitrary code and then execute it. Not arbitrary JS code. Arbitrary instructions. In the browsers process.
Even C and C++ do not have this type of vulnerability. At best, you can overwrite the return pointer with a buffer overflow and execute some code somewhere. But it's not 1995 anymore: I can't just write shellcode into the buffer and then naively jump back into it.
But with JIT JS, I can.
> I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all)
Recent XSLT parser exploits were literally the reason this whole push to remove it was started, so this change will specifically be helping people in your shoes.
So it's a parser implementation problem, not XSLT per se.
I feel like there's a bias here due to XSLT being neglected and hence not receiving the same powers as JS. If it did get more development in the browser, I'm pretty sure it would get the same APIs that we hate JS for, and since it's already Turing complete, chances are people would find ways to misuse it and bloat websites.
Makes me kind of sad. I started my career back in the days when XHTML and co were lauded as the next big thing. I worked with SOAP and WSDLs. I loved that one could express nearly everything in XML. And namespaces… Then came JSON, and apart from it being easier for humans to read, I wondered why we switched from this one great exchange format to this half-baked one. But maybe I’m just nostalgic. Still, every time I deal with JSON parsers for type serialization and the question of how to express HashMaps and sets, how to provide type information, etc., I think back to XML and the way that everything was available on board. Looked ugly as hell, though :)
JSON is sort of a Gresham's law ("bad money drives out the good"), but for tech: lazy and forgiving technologies drive out the better but stricter ones.
bad technology seems to make life easier at the beginning, but that's why we now have sloppy websites that are an unorganized mess of different libraries, several MB in size without reason, and an absolute usability and accessibility nightmare.
xhtml and xml were better, also the idea separating syntax from presentation, but they were too intelligent for our own good.
> lazy and forgiving technologies drive out the better but stricter ones.
JSON is not "lazy and forgiving" (seriously, go try adding a comment to it).
It was just laser-focused on what the actual problem was that needed to be solved by many devs in day-to-day practice.
Meanwhile XML wanted to be an entire ecosystem, its own XML Cinematic Universe, where you had to adopt it all to really use it.
It's not surprising to me that JSON won out, but it's not because it's worse, it's actually much better than XML for the job it ended up being used for (a generic format to transfer state between running programs supporting common data structures with no extraneous add-ons or requirements).
XML is better for a few other things, but those things are far less commonly needed.
Don’t know if I would describe it as much better. I see it as similar to the whole SQL -> NoSQL -> let’s add all the features and end up with SQL again. JSON underwent a similar story, with the difference that we didn’t go back to XML. What I mean is: simplify, then realize what was actually missing. But I agree that for smaller services and state transfer, especially on the web, XML was just too damn big and verbose. Conceptually, though, it was great.
For "I need this Python dict to exist in this unrelated JavaScript program's context space" JSON is absolutely much better. If only because you completely sidestep all the various XML foibles, including its very real security foibles.
JSON is so good at this that, like CSV, it does displace better tech for that use case, but the better tech isn't usually XML but rather things like Avro or Protobuf.
For the most part people don't add on XML features to JSON. Comments are a frequent addition, sometimes schemas, but for the most part the attraction of JSON is avoiding features of XML, like external entity validation, namespaces, getting to choose between SAX or DOM styles, or being required to support unrelated XML systems just to use another.
Again, there are problem domains where those are helpful, and XML is a good fit for those. But those problem spaces end up being much smaller in scale than the ones solved by JSON, Avro, Iceberg, etc.
But the whole point to JSON is to be nearly as dumb simple as possible so that the complexity of the problem domain will necessarily be handled in a real programming language, not by the data magically trying to transform itself.
Just.. and for so long:
XSLT is a W3C standard; JavaScript is not (it's an ECMA standard), and there is no JavaScript specification on the W3C's pages.
( https://www.w3.org/wiki/JavaScript )
Shouldn't JavaScript become a web standard first, before being used to "replace" an already-standard solution?
I like XSLT, and I’ve been using the browser-based APIs in my projects, but I must say that XSLT ecosystem has been in a sad state:
- Browsers have only supported XSLT 1.0, for decades, which is the stone age of templating. XSLT 3.0 is much nicer, but there’s no browser support for it.
- There are only two cross-platform libraries built for it: libxslt and Saxon. Saxon seriously lacks ergonomics to say the least.
One option for Google as a trillion dollar company would be to drive an initiative for “better XSLT” and write a Rust-based replacement for libxslt with maybe XSLT 3.0 support, but killing it is more on-brand I guess.
I also dislike the message “just use this [huge framework everyone uses]”. Browser-based template rendering without loading a framework into the page has been an invaluable boon. It will be missed.
If you are using XSLT to make your RSS or atom feeds readable in a browser should somebody click the link you may find this post by Jake Archibald useful: https://jakearchibald.com/2025/making-xml-human-readable-wit... - it provides a JavaScript-based alternative that I believe should work even after Chrome remove this feature.
I think that's sad. XSLT is, in my point of view, a very misunderstood technology. It gets hated on a lot. I wonder if this hate comes from people who actually used and understood it, though. In any case, more often than not it comes from people who in the same sentence endorse JavaScript (which, by any objective measure, is a far more poorly designed language).
IMO XSLT was just too difficult for most webdevs. And IMO this created a political problem where the 'frontend' folks needed to be smarter than the 'backend' generating the XML in the first place.
XSLT might make sense as part of a processing pipeline. But putting it in front of your website was just an unnecessary and inflexible layer, so that's why everyone stopped doing it (except for RSS feeds and the like).
Iterating upon a JSON in raw JS is much sexier than learning XSLT. (at least, JS allows breakpoints in the Chrome debugger)
What does XSLT provide that you cannot achieve with plain JS?
I'm not much of a programmer, but XSLT being declarative means that I can knock out a decent-looking template without having to do a whole lot of programming work.
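To give a flavor of that declarative style, here is a minimal, schematic XSLT 1.0 stylesheet (element names assume a plain RSS 2.0 feed; it is a sketch, not production code) that renders a feed as a linked list with no imperative code anywhere:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the feed's channel and lay out the page skeleton -->
  <xsl:template match="/rss/channel">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <ul>
          <xsl:apply-templates select="item"/>
        </ul>
      </body>
    </html>
  </xsl:template>
  <!-- Each item becomes one linked list entry -->
  <xsl:template match="item">
    <li>
      <a href="{link}"><xsl:value-of select="title"/></a>
    </li>
  </xsl:template>
</xsl:stylesheet>
```

Linked from the feed with an `<?xml-stylesheet?>` processing instruction, this is all it takes to turn the "wall of XML" into a readable page.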
It can style RSS without enabling javascript and all that spy and malware.
Au contraire: the more you understand and use XSLT, the more you hate it. People who don't understand it and haven't used it don't have enough information and perspective to truly hate it properly. I and many other people don't hate XSLT out of misunderstanding at all: just the opposite.
XSLT is like programming with both hands tied behind your back, or pedaling a bicycle with only one leg. For any non-trivial task you quickly hit a wall of complexity or impossibility. At that point the only way XSLT stays useful is if you use Microsoft's non-standard XSLT extensions that let you call out to JavaScript, and then you realize it's so much easier and more powerful to simply do what you want directly in JavaScript that there's absolutely no need for XSLT.
I understand XSLT just fine, but it is not the only templating language I understand, so I have something to compare it with. I hate XSLT and vastly prefer JavaScript because I've known and used both of them and other worse and better alternatives (like Zope Page Templates / TAL / METAL / TALES, TurboGears Kid and Genshi, OpenLaszlo, etc).
https://news.ycombinator.com/item?id=44396067
https://news.ycombinator.com/item?id=22264623
https://news.ycombinator.com/item?id=28878913
https://news.ycombinator.com/item?id=16227249
>My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
>Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
Counterpoint: the more I used XSLT, the more I liked it, and the more I was frustrated that the featureset that ships in browsers is frozen in 1999
You should really try some of the modern alternatives. Don't let Angular and React's templating systems poison you, give Svelte a try!
Even just plain JavaScript is much better and more powerful and easier to use than XSLT. There are many JavaScript libraries to help you with templates. Is there even any such thing as an XSLT library?
Is there some reason you would prefer to use XSLT than JavaScript? You can much more easily get a job yourself or hire developers who know JavaScript. Can you say the same thing for XSLT, and would anyone in their right mind hire somebody who knows XSLT but refuses to use JavaScript?
XSLT is so clumsy and hard to modularize, only good for messy spaghetti monoliths, no good for components and libraries and modules and frameworks, or any form of abstraction.
And then there's debugging. Does a good XSLT debugger even exist? Can it hold a candle to all the off-the-shelf built-in JavaScript debuggers that every browser includes? How do you even debug and trace through your XSLT?
I think the fundamental disconnect here is that you're assuming that I am a developer. I'm not, I'm a lousy developer. It's not for lack of trying, programming just doesn't click for me in the way that makes learning it an enjoyable process.
XSLT is a good middle ground that gave me just enough rope to do some fun transformations and put up some pages on the internet without having to set up a dev environment or learn a 'real' programming language
As for the lack of libraries: technically they are possible, but they are not used that often. Maybe they are not that necessary. XML's idea is a federation of notations. Each notation is semantic, that is, it makes only specific distinctions. XSLT transforms between two such notations. Since each combination is unique, there is little to reuse. Where other tools use libraries, XSLT uses separate stylesheets and just rearranges them differently.
In Saxon you write templates that give you partial results and just call them with the -it:name switch. So if you have a four-step transform you can examine the results of each step in isolation. In libxslt that only supports 1.0 you can do this with parameters, although this is less convenient.
You can trace with xsl:message. Step-by-step debug is not there, granted; but it is definitely the last resort tool even when it is available.
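For instance, a schematic sketch of that kind of tracing (the match pattern and message text are illustrative, not from any real stylesheet):

```xml
<!-- Emits a trace line each time an item is matched; xsl:message output
     goes to stderr in xsltproc and to the message listener in Saxon. -->
<xsl:template match="item">
  <xsl:message>
    <xsl:text>processing item: </xsl:text>
    <xsl:value-of select="title"/>
  </xsl:message>
  <xsl:apply-templates/>
</xsl:template>
```

Crude compared to a breakpoint, but enough to see which templates fire and in what order.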
It is always correct to tell someone they are wrong for liking something, and doing so is how we keep HN great.
Well said. I wrote an XSLT based application back in the early 2000s, and I always imagined the creators of XSLT as a bunch of slavering demented sadists. I hate XSLT with a passion and would take brainfuck over it any day.
Hearing the words Xalan, Xerces, FOP makes me break out in a cold sweat, 20 years later.
That's upsetting. Being able to do templating without using JavaScript was a really cool party trick.
I've used it in an unfinished website where all data was stored in a single XML file and all markup was stored in a single XSLT file. A CGI one-liner then made path info available to XSLT, and routing (multiple pages) was achieved by doing string tests inside of the XSLT template.
"Removing established open standards for a more walled garden" -> Fixed
To those who saw a chrome.com link and got triggered:
> The Firefox[^0] and WebKit[^1] projects have also indicated plans to remove XSLT from their browser engines.
[^0]: https://github.com/mozilla/standards-positions/issues/1287#i...
[^1]: https://github.com/whatwg/html/issues/11523#issuecomment-314...
In my opinion this is not “we agree lets remove it”. This is “we agree to explore the idea”.
Google and Freed are using this as a go-ahead because the Mozilla guy pasted a polyfill. However, it is very clearly NOT an endorsement of removal, even though bad actors are stating so.
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support. If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
Freed et al also explicitly chose to ignore user feedback for their own decision and not even try to improve XSLT security issues at the cost of performance.
Last I heard for WebKit removing it was the only outcome they saw.
Yeah, all these billion-dollar corporations that can’t be bothered see it as the only path forward, not because of technological or practical issues, but because none of them can be arsed to give a shit and plan it into their budgets.
They’re MBAs who only know how to destroy and consolidate as trained.
I get the frustration but I don’t believe that’s really accurate. It’s not widely used and modern developers don’t see it as valuable.
I’m a modern developer and I see it as valuable. Why side with the browser teams and ignore user feedback?
If “modern developers” actually spent time with it, they’d find it valuable. Modern developers are idiots if their constant cry is “just write it in JS”.
No idea what’s inaccurate about this. A billion dollar company that has no problem pivoting otherwise, can’t fund open technology “because budgets” is simply a lie.
The dominant user feedback is the hard statistics on how rarely it's used.
You can't trim the space of "users" to just "people who already adopted the technology" in the context of the cost of browser support.
Yes excellent way to continue to diminish users of tech you don’t agree with.
“The people who actually use it are wrong and don’t matter!”
I'm not personally in the business of maintaining a browser.
But if I were, and I were looking to decrease cost of maintenance, "This entire rendering framework that supports a 0.02% use case" would be an outlier for chopping-block consideration. Not all corner-case features match that combination of cost-to-maintain and adoption (after, what, decades at this point?).
We wouldn't be arguing the point if the feature in question were fax machine support, right?
Yes we would because people still use fax machines.
I don’t understand this “everything must be a business metric because it can be, therefore if I can whittle any feature down to a small minority, I am forever correct and just in destroying said technology. Look how smart and savvy I am.”
Totally brain dead.
Are browser manufacturers required to support features forever no matter how low the usage is? Do you still mourn for the loss of Gopher support?
Yes I mourn the loss of Gopher, RSS, XML, etc.
Browser developers don’t have to do shit but it’s against the idea of an open web to kill off technology, especially one that’s A PART OF THE HTML STANDARD.
You love a closed web. That’s why you’re backing Google and arguing for this. I can’t change that there are so many weak minded “yes daddy Google” people in the world.
All I can do is advocate we support many technologies and ideas. The world you are advocating sounds locked down and uninteresting.
The proposal is to remove support and change the standard. Standards evolve. Sometimes features are removed because they are costly to maintain or security problems or both.
The fact is that all the major browsers are looking to deprecate this functionality because they all agree it’s a security bug farm and too underutilized to justify fixing.
> You love a closed web. That’s why you’re backing Google and arguing for this. I can’t change that there are so many weak minded “yes daddy Google” people in the world.
Don’t do this. We can just disagree without resorting to strawman and ad hominem attacks. No one insulted you for holding your opinion.
Before this my mental model of you was an engineer who’s frustrated that he’s going to have to do work to deal with the deprecation. I can empathize with that even if I think you are wrong in believing that browsers should invest further in support of xslt. Now I realize you just lack empathy for other engineers who are also forced to make real world trade offs. The fact that you happen to use xslt in the browser does not make it important relative to all the other features browsers support.
> Sometimes features are removed because they are costly to maintain or security problems or both.
But the features and the tech don’t have security problems. The library implementation does. This is exactly the kind of bad-faith argument I’m talking about. Please, just for one second, try making an argument pro-XSLT and then try to compare the two mindsets about technology here.
There is no negative trade off by maintaining XSLT other than not being lazy developers. I have no empathy for people who hide behind billion dollar corporations and do their bidding. This is not some sort of critical situation, this is a Google engineer doing things because it’s easier, not “right” or “the difficult choice”.
> There is no negative trade off by maintaining XSLT other than not being lazy developers.
Only because it’s not your money or time being traded. Yes, if we pretend that engineering effort is free then there’s no reason Google couldn’t just rewrite this entire library in Rust or whatever. But if that were true you would just rewrite the library yourself and send the pull request to Chromium.
In the real world where engineering costs time and money, every decision is a trade off. Someone rewriting libxslt to be secure is someone who’s not implementing other features and who’s not fixing other bugs on the backlog.
Resources allocated to Chromium are finite and while sure, Google could hire 2 more engineers to do this, in reality those 2 new engineers could and would be assigned to higher priority work.
> this is a Google engineer doing things because it’s easier, not “right” or “the difficult choice”.
You keep blaming Google specifically. All of the major browsers are planning to drop this though. They all agree this is the right trade off.
Surprise, there’s already an effort to write xml related libraries in Rust: xrust library for one.
It doesn’t have to be this pearl-clutching bitching and moaning about budgets and practicality. That’s just what Google wants you to believe.
No all of the major browsers weren’t planning to drop this. It literally only started happening because of Google. And Google is essentially forcing the hand. Again this is bad faith argument. I will not concede on my thoughts on this unnecessary destruction of XSLT in the browser while other technologies get a pass.
I don't see how it can be reasonably asserted that this is "destruction of XSLT in the browser" when there are multiple XSLT conversion engines available in JavaScript.
Because when you convert xslt to javascript, it’s not xslt, is it? It’s javascript pretending to be xslt. Furthermore, if you restrict rendering xslt to javascript, then you lose all the performance benefits of xslt at the engine level.
Also if I’m going to be using Javascript why would I then reach for xslt instead of the other 8000 templating libraries.
All your “its fine to remove it” arguments only work if you ignore all the reasons it’s not fine to remove it. That’s awfully convenient.
You'd be replacing the native-implemented XSLT engine in the browser with a JavScript-implemented XSLT engine living in the JavaScript sandbox, not replacing XSLT with JavaScript.
There's a theorem with Turing's name on it that clarifies why that's equivalent.
> if you restrict rendering xslt to javascript then you lose all the performance benefits of xslt at the engine level.
Nowadays, I'd have to benchmark before just assuming the native implementation is faster. Especially if one of the issues is libxslt is under-maintained. JS is quite fast these days (and the polyfill investigated in the OP is wasm, not even JS). This problem can also be solved by moving the rendering server-side, in which case your ceiling for speed is much higher (and you're not spending your client's CPU on finishing the rendering of your page; added bonus in this era of mobile, battery-constrained devices).
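As a rough illustration of the server-side option: a stdlib-only Python sketch that does what a feed stylesheet would do, assuming plain RSS 2.0 element names (no HTML escaping, for brevity; the function name is made up for this example):

```python
import xml.etree.ElementTree as ET

def feed_to_html(rss_xml: str) -> str:
    """Render an RSS 2.0 document to a minimal HTML fragment, server-side.

    A sketch of the 'move the transform off the client' idea: do on the
    server what an XSLT stylesheet would have done in the browser.
    """
    channel = ET.fromstring(rss_xml).find("channel")
    items = []
    for item in channel.findall("item"):
        # findtext returns the text of the first matching child element
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="#")
        items.append(f'<li><a href="{link}">{title}</a></li>')
    heading = channel.findtext("title", default="")
    return f"<h1>{heading}</h1><ul>{''.join(items)}</ul>"
```

The client then receives finished HTML and never sees raw XML at all, sidestepping the deprecation entirely.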
> Also if I’m going to be using Javascript why would I then reach for xslt instead of the other 8000 templating libraries.
Great point. Why do we have this one arbitrary meta-language for declarative transformation of one datatype that's "blessed" with a native implementation? That's an odd corner-case, isn't it. We should probably simplify by putting it on-par with other solutions for that problem.
Google isn’t forcing anyone’s hand here. They are removing functionality. Everyone else could just not do that and maintain compatibility if they believed it was valuable to do so.
I don’t know why you have a chip on your shoulder for Google but sure. Yes, Google is clearly doing this purely because they are evil and removing this little-used tech is the key to cementing their stranglehold on the internet. Yes, Google is strong-arming poor Apple and Mozilla into this. Yes, everyone who disagrees with you is both a complete moron and a “daddy Google” fanboy.
Better now?
There's a lot of people saying "All we need to do is maintain libxslt" and a distinct lack of people actually stepping up to maintain libxslt.
I, for one, won't. Not for the purpose of keeping it in the browser. There are just too many other ways to do what it does for me to want that (and that's before we get into conversations about whether I want to be working with XML in the first place on the input side).
> not being lazy developers
Laziness is one of the three great virtues of programming. Every second not spent maintaining libxslt is a second we can spend on something more people care about.
It's a rule for writers, but it applies to software architecture also: kill your darlings.
Sure. But since this announcement I’ve been planning ways to support xslt. Here are the projects I’m considering:
- iced-ui browser with xslt + servo
- contribute to xslt xrust project
- investigate sandboxing xslt in firefox and implementing xrust
- same as the previous but for Chrome
- have an llm generate 1000s of static websites built in xslt and deploy them across the internet
- promote and contribute to the recent Show HN post of an xslt framework
I figure if I can generate enough effort to spam the web with xslt sites then they can’t remove it. Ultimately the first goal is to delay the removal from Chrome. This will delay the removal in other browsers. I don’t care if it’s ineffective. It’s better than doing nothing.
So you can take your lazy tenets of software engineering and your “hyuck, Google knows best” elsewhere.
These are all great ideas and I support you pursuing what you are passionate about.
No you don’t, because my passions are for an open, technically varied, unbroken web, and you’ve spent the entire time trying to paint me as crazy for wanting any of that, just because not all technology is equally popular or profitable for Google.
With respect: I've never called you crazy, nor did I imply it, nor did I mean to imply it.
I think your cost / benefit analysis on maintaining a native in-browser implementation of an old, niche declarative transformation language for a hard-to-read data format that hasn't been the dominant model of sending data to browser clients for at least fifteen years is flawed, but it's not crazy. Reasonable people can certainly put their priorities in different places, and I respect our priorities don't align on this topic and you have a right to your opinion.
Practically speaking, what does "open web" vs. "Closed web" mean in this context?
Is "open web" just "maximally complies with the w3c standard?"
Open web means popular browsers supporting a wide range of technologies that institutions, businesses, and people use.
Not: popular browsers needle through technologies and tell everyone they know best
Does that make sense? Openness on the web isn’t a new term or concept, so I’m not sure what’s confusing. It’s certainly not killing off technologies people are using.
What is open web to you? “Overpaid Mba at Google says this is best so you better fall in line.”
> Open web means popular browsers supporting a wide range of technologies that institutions, businesses, and people use.
The “wide range of technologies” is not what makes the web “open”.
The openness comes from the fact that anyone can write web sites and anyone can write a browser that can render the same websites that chrome can render. “More features” does not imply “more open”.
Dropping support for xslt would make the web less open if it were being replaced by some proprietary Google tech. But that’s not what’s happening here at all.
> Not: popular browsers needle through technologies and tell everyone they know best
How else would it possibly work? Everyone has to actively choose the features they will build and support.
I don’t care to continue this discussion, primarily because you are making nearly the same points as two other commenters and it has become a three-way, exhausting conversation. You hate XSLT or something and love Google; congrats, you win this discussion. XSLT will be removed, Javascript will reign king. You will be happy. Everyone will say Mason Freed is right and smart and that XML sucks because no one who matters uses it. I was never going to convince you to like or consider other technologies anyway. And since that is true, this conversation doesn’t do anything to help save XSLT and isn’t worth continuing, for me at least.
I wish it weren't the case but good luck and I'm sure we'll speak again with nearly the same conversation at the next thread for a standard deprecation that ad companies don't like.
Bold of you to assume other commenters in this thread have no experience with XML or XSLT.
I was there when it was the new hottest thing and I was there when it became last year's thing. These things come and go, and this one's time has come.
Given that the thing we want to support can be supported via server-side translation, client-side translation in JavaScript, or automatic detection and translation via a third-party plugin in JavaScript... What bearing on the open web does it have to preserve this functionality as a JavaScript-accessible API built into the browser?
I don't see how removing this API harms the open web. I do see how it imposes some nonzero cost on website maintainers to adapt to the removal of the API, and I respect that cost is nonzero.
... but I also watched OpenGL evolve into OpenGL ES and the end result was a far better API than we started with.
I don't think XSLTProcessor joining document.defaultCharset in the history books is damage to the open web.
ETA: And not that it matters over-much, but the "overpaid Mba" is an electrical engineering Ph.D who came to software via sensor manufacturing, so if you're going to attempt to insult someone's training, maybe get it right? Not that he'd be wrong if he were an Mba either, mind.
No, the MBA is Freed’s boss, who installed him there and ensures he enforces the closing of the web. Coming from sensor manufacturing to software isn’t really that impressive, but it does make sense why a sensor-manufacturing engineer would make arguments for removing a spec like XSLT but not a terribly complicated and security-vulnerable spec like Bluetooth. Which probably has 10x the complexity and 10x the security plane of XSLT. Thank you, this whole thing is an even bigger joke.
Enjoy the precedent this sets for other tech not in the Google stable. You clearly are getting what you want so why continue this discussion. Who are you trying to convince?
> Which probably has 10x the complexity and 10x security plane
... and 10x the utility, since unlike XSLT Bluetooth requires sandboxed and mediated access to OS-level APIs that cannot be feature-compatible replicated with 3MB of JavaScript.
And sandboxing XSLT was one of the suggested ways to not break the web. But they ignored it.
The question for all the browser developers is not “can we feasibly support this feature” but “is it worth it to support this feature”?
Because they must address the security problems, there is no zero-cost solution to maintain compatibility. They either abandon it or rewrite it which comes with support costs forever.
I understand you believe they made the wrong choice and I understand why you feel that way. But according to their calculus they are making the right choice based on how widely used the feature actually is.
I believe you, but I think I missed that part of the conversation.
Running an XSLT engine in JavaScript is sandboxed. It's sandboxed by the JS rules. In terms of security, it's consolidating sandboxing concerns because risk of breaking XSLT becomes risk of breaking the JS engine, whereas right now there are two potential attack vectors to monitor.
(There is an unwritten assumption here: "But I can avoid the JS issues by turning off JavaScript." Which is true, but I think the ship is pretty well sailed for any w3c-compliant browser to be supporting JavaScript-off as a first-class use case these days. From a safety standpoint, we even have situations where there are security breach detections against things like iframe-busting that only work with JavaScript on).
Gopher support in browser was never, IIUC, a w3c standard.
Piecing the puzzle pieces together from multiple threads:
There's an argument to be made that the HTML standard, which post-dates the browser wars and was more-or-less the detente that was agreed upon by the major vendors of the day, includes a rule that no compliant browser should drop a feature (no matter how old or unused that feature is) because "Don't break the web." In other words: it doesn't matter if there's zero users of something in the spec; if it's in the spec and you claim to be compliant, you support it.
XSLT has been a W3C recommendation since 1999 and XSLT2 and 3 were added later, and no W3C process has declared it dead. But several browser engines are declaring it's too cumbersome to maintain so they're planning to drop it. This creates an odd situation because generally a detente has held standards in place: you don't drop something people use because users won't perceive the sites that use the tech as broken, they'll perceive your browser is broken and go use someone else's browser.
... except that so many vendors are choosing to drop it simultaneously, and the tech is so infrequently used, that there's a good chance this drop will de-facto kill XSLT client-side rendering as a technology web devs can rely upon regardless of what the spec says.
So people are concerned about a perceived shift in the practical "balance of power" between the standards and the browser developers that (reading between the lines) could usher in the bad old days of the Microsoft monopoly again, except that this time it's three or four browser vendors agreeing upon what the web should be and doing it without external input, instead of Microsoft doing what it wants and Firefox fighting them / fighting to keep up. Consolidation of the myriad browsers onto just three engines enables this.
> that there's a good chance this drop will de-facto kill XSLT client-side rendering as a technology web devs can rely upon regardless of what the spec says.
This is coming out of WHATWG so in actuality the spec itself is being updated to remove the functionality. So yes, the end state is very much that devs cannot rely on this functionality.
Do you have a link to the WHATWG discussion? I went looking for it yesterday and was unable to find details.
https://github.com/whatwg/html/issues/11523
From there they link to the minutes for the meeting where this was raised. Interestingly the Google engineer who raised this at the meeting was formerly at Mozilla for years. I don’t know if Mozilla was already looking to remove this or not.
“Everyone who uses the blink tag agrees it’s critical functionality.”
XSLT in the browser was left fundamentally underdeveloped, which is why it is not really widespread.
XSLT in non-browser contexts is absolutely valuable.
Agreed; as a technology, it's both clever and fun. I learned it right around the time I first touched functional programming in general and it was neat to see how you could build a chain of declarative rules to go from one document to another document.
Personally, I don't think we need a dedicated native-implemented browser engine for it. But in general I'm glad the tech exists.
So XPath locators won't be available in Playwright and Selenium in Chrome? This could be huge for QA and RPA.
They are still keeping the XPath APIs so XPath locators will still work.
Ah, so this is removing libxslt. For a minute I thought XSLT processing was provided by libxml2, and I remembered seeing that the Ladybird browser project just added a dependency on libxml2 in their latest progress update https://ladybird.org/newsletter/2025-10-31/.
I'm curious to see what happens going forward with these aging and under-resourced—yet critical—libraries.
As someone who built an XSLT renderer and remembers having an awful time with the spec: good riddance.
Data and its visualisation should be strictly separate, and not require an additional engine in your environment of choice.
https://www.igalia.com/chats/xslt-liam
XSLT has proven to be a remarkably robust technology that has already survived the test of time: for decades (!), despite a lack of support and investment, with key browser bugs deliberately left unfixed and the implementation stuck at version 1.0, it's still in use, and where it is used it holds up well. Meanwhile it lives on elsewhere too; see Xee: A Modern XPath and XSLT Engine in Rust (https://news.ycombinator.com/item?id=43502291). And because it's a declarative way of transforming trees and collections of trees. And declarative means you don't say how to do it. You say, "This is what I want."
That makes it timeless: an abstracted definition to which imperative solutions can, at best, be reduced. Authors unaware of that keep trying (and will soon have to try) to reimplement that "not needed" part themselves, in more or less common or compatible ways (e.g. https://news.ycombinator.com/item?id=45183624).
So better keep it: not everybody can afford expensive solutions, and there are nonprofits too that can't waste money repeating the same work and who like to KISS!
I know it makes me an old and I am biased because one of the systems in my career I am most proud of I designed around XSLT transformations, but this is some real bullshit and a clear case why a private company should not be the de facto arbiter of web standards. Have a legacy system that depends on XSLT in the browser? Sucks to be you, one of our PMs decided the cost-benefit just wasn't there so we scrapped it. Take comfort in the fact our team's velocity bumped up for a few weeks.
And yes I am sour about the fact as an American I have to hope the EU does something about this because I know full-well it's not happening here in The Land of the Free.
I don't use XSLT and don't object to this, but seeing "security" cited made me realize how reflexively distrustful I've become of them using that justification for a given decision. Is this one actually about security? Who knows!
Didn't this come pretty directly after someone found some security vulns? I think the logic was: this is a huge chunk of really complex code that almost nobody uses outside of toy examples (and RSS feeds). Sure, we fixed the issue just reported, but who knows what else is lurking here; it doesn't seem worth it.
As a general rule, simplifying and removing code is one of the best things you can do for security. Sure you have to balance that with doing useful things. The most secure computer is an unplugged computer but it wouldn't be a very useful one; security is about tradeoffs. There is a reason though that security is almost always cited - to some degree or another, deleting code is always good for security.
The vulnerabilities themselves often didn't really affect Chrome, but by the maintainers' own admission the code was never intended to be security critical. They got burned out after a series of vulnerability reports with publication deadlines, and decided to just treat security bugs like normal bugs so the community could help fix things. That doesn't really fit with the "protect users by keeping security issues secret for three months" approach corporations prefer. Eventually the maintainers stepped down.
Neither Google nor Apple were willing to sponsor a fork of the project but clearly they can't risk unmaintained dependencies in their billion dollar product, so they're rushing to pull the plug.
"Who knows what's lurking there" is a good argument to minimize attack surface, but Google has only been adding more attack surface over the past couple of years. I find it hard to defend the position that processing a structured document should be outside a browser's feature set while JavaScript USB drivers and serial ports are necessary to drive the web. The same way libxml2 was never intended to be security critical, many GPU drivers were never written to protect against malicious programs, yet WebGPU and similar technology is being pushed hard and fast.
If we're deleting code to guard against theoretical security risks, I know plenty of niche APIs that should probably be axed.
> the maintainers' own admission the code was never intended to be security critical
I find this hard to believe. XML is a portable data serialization format. The entire point of XML is to transfer data between separate parties who presumably don't trust each other. Most non-browser XML usages are security critical and have been from the beginning.
Just look at their website from 2001 (https://web.archive.org/web/20010202061700/http://xmlsoft.or...): they specifically advertise processing remote documents over HTTP and FTP as a use case.
Second, I don't know what libxml has to do with anything. I know the libraries are related, but browsers are not removing libxml; they are removing libxslt.
> Neither Google nor Apple were willing to sponsor a fork of the project but clearly they can't risk unmaintained dependencies in their billion dollar product, so they're rushing to pull the plug.
Why would they sponsor a fork? I wouldn't in their position. A fork solves the maintenance issue but not the other problems. The lack of maintainers exacerbates the problem, but it is not the problem.
There is this weird entitlement people have with Google, where people expect Google to randomly provide open source projects with free labour even if it's not in line with Google's interests.
If Google was demanding someone else step up to maintain it, then saying they should do it themselves would be reasonable. But they aren't doing that. After all, a new maintainer did step up, but the deprecation is still happening, as it wasn't solely about that.
> but Javascript USB drivers and serial ports are necessary to drive the web
I also think these are stupid features, but their risk profile is probably quite a bit lower than xslt. Not all code has the same risk factors. What sort of api is exposed, how its exposed, code quality, etc all matter.
Ultimately though, it's Google's web browser. People have this weird entitlement with Google where they expect Google to make specific product decisions. It's open source software. The solution to Google making bad product decisions is the right to fork. If the decisions are that bad, the fork will win. Having Google design their software by a committee, where the committee is the entire internet, is not reasonable, nor would it make a good product.
What are you talking about, "free labor"? They are using the damn library, Google would benefit directly.
But they dont want to use the library.
In that case, they should not call Chrome a web browser, it's AOL all over again.
> As a general rule, simplifying and removing code is one of the best things you can do for security.
Sure, but that’s not what they’re doing in the big picture. XSLT is a tiny drop in the bucket compared to all the surface area of the niche, non-standard APIs tacked onto Chromium. It’s classic EEE.
https://developer.chrome.com/docs/web-platform/
My understanding is that, contrary to popular opinion, it is Firefox, not Chrome, that originally pushed for the removal, so I don't know how relevant that is. It seems like all browser vendors are in agreement on XSLT.
That said, XSLT is a bit of a weird API in how it interacts with everything. Not all APIs are equally risky, and I suspect XSLT is pretty high up there on the risk-vs-reward ratio.
There are security issues in the C implementation they currently use. They could remove this without breaking anything by incorporating the JS XSLT polyfill into the browser. But they won't because money.
> Finding and exploiting 20-year-old bugs in web browsers
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
— https://www.offensivecon.org/speakers/2025/ivan-fratric.html
— https://www.youtube.com/watch?v=U1kc7fcF5Ao
> libxslt -- unmaintained, with multiple unfixed vulnerabilities
— https://vuxml.freebsd.org/freebsd/b0a3466f-5efc-11f0-ae84-99...
It's true that there are security issues, but it's also true that they don't want to put any resources into making their XSLT implementation secure. There is strong unstated subtext that a huge motivation is that they simply want to rip this out of Chrome so they don't have to maintain it at all.
Especially when Google largely has the money to maintain the alleged unsecure library... of course it's an excuse to break the web once again.
XSLT is fantastic. You just feed it an XML file and it can change it into HTML, without any need for javascript.
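A minimal sketch of what that looks like (filenames and element names here are invented for illustration): the XML document points at a stylesheet via a processing instruction, and a supporting browser renders the transformed HTML instead of raw markup.

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="feed.xsl"?>
<feed>
  <entry><title>Hello</title></entry>
  <entry><title>World</title></entry>
</feed>

<!-- feed.xsl (a separate file): turns the document above into an HTML list -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Rule for the document root: emit the HTML skeleton -->
  <xsl:template match="/feed">
    <html><body><ul><xsl:apply-templates select="entry"/></ul></body></html>
  </xsl:template>
  <!-- Rule for each entry: emit a list item with the title text -->
  <xsl:template match="entry">
    <li><xsl:value-of select="title"/></li>
  </xsl:template>
</xsl:stylesheet>
```

The same pair of files also works offline with command-line processors like xsltproc, which is part of why feed authors liked the approach: one document, two renderings.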
> example.com needs to review the security of your connection before proceeding.
This text from Cloudflare challenge pages is just a flat-out lie.
It's "Security" when they want to do a thing; it's "WebCompat" when they don't.
I'd never written any XSL before last week when I got the crazy idea of putting my resume in XML format and using stylesheets to produce both resume and CV forms, redacted and not, with references or not. And when I had it working in HTML, I started on the Typst stylesheet. xsltproc, being so old, basically renders the results instantly. And Typst, being so new, does as well.
Previous discussion https://news.ycombinator.com/item?id=44952185
- Chrome 155 (Nov 17, 2026): XSLT stops functioning on Stable releases, for all users other than Origin Trial and Enterprise Policy participants.**
- Chrome 164 (Aug 17, 2027): Origin Trial and Enterprise Policy stop functioning. XSLT is disabled for all users.**
Not the first time I've seen on Google's pages that the use of asterisks then lacks the corresponding footnotes.
When do we see the headline: Removing Javascript for a more secure browser?
So the honest title should have been: Removing XSLT because it cannot serve ads
(Yes, the underlying implementation might be insecure. But how secure would Javascript be with the same amount of maintenance in the last 20 years?)
To this day I think the move from DSSSL to XSLT was the biggest mistake in the SGML-to-XML evolution.
They went from a clean Scheme-based standard to a human-unreadable "use a GUI tool" syntax.
Syntext Serna was a WYSIWYG XML FO rendering editor: throw in XSL stylesheets and some input XML, and WYSIWYG edit away.
But XSLT didn't take off, nor did derived products.
XSLT seem like it could be something implemented with WebAssembly (and/or JavaScript), in an extension (if the extension mechanism is made suitable; I think some changes might be helpful to support this and other things), possibly one that is included by default (and can be overridden by the user, like any other extension should be); if it is implemented in that way then it might avoid some of the security issues. (PDF could also be implemented in a similar way.)
(There are also reasons why it might be useful to allow the user to manually install native code extensions, but native code seems to be not helpful for this use, so to improve security it should not be used for this and most other extensions.)
The lead dev driving the Chrome deprecation built a wasm polyfill https://github.com/mfreed7/xslt_polyfill. Multiple people proposed in the Github discussions leading up to this that Google simply make the polyfill ship with Chrome as an on-by-default extension that could be disabled in settings, but he wouldn't consider it.
Good. Finally getting rid of this security and usage nightmare. There's a polyfill and an extension, so even the diehards will be pleased.
https://github.com/whatwg/html/issues/11146#issuecomment-275...
.. just like that, but: https://github.com/whatwg/html/issues/11582#issuecomment-321...
That's our freedom of not being forced to use JavaScript for everything being taken away!
It wasn't clear to me from reading this whether styling XML with CSS will also stop being supported. There's no need to deprecate that, surely?
Pour one out for @vgr-land https://news.ycombinator.com/item?id=45006098
Removing html and css would also make the browser more secure - but I would argue also very counter productive for users.
Most read-only content and light editing can be achieved with raw data + XSLT.
The web has become a sledgehammer for cracking a nut.
For example, with XSLT you could easily render read-only content without complex and expensive office apps. That is enough for academia, government, and small businesses.
If they really cared about "security" they would remove JS or try to encourage minimising its use. That is a huge attack surface in comparison, but they obviously want to keep it so they can shove in more invasive and hostile user-tracking and controlling functionality.
I do all of my browsing with Javascript disabled. I've done this for decades now, as a security precaution mainly, but I've also enjoyed some welcome side-effects where paywalls disappeared and ads became static and unobtrusive. I wasn't looking for those benefits but I'll take 'em. In stride.
I've also witnessed a welcome (but slow) change in site implementations over the years: there are few sites completely broken by the absence of JS. Still some give blank screens and even braindead :hidden attributes thrown into the <noscript> main page to needlessly forbid access... but not as many as back in the day when JS first became the rage.
I don't know much about XSLT other than the fact that my Hiawatha web server uses it to make my directory listings prettier, and I don't have to add CSS or JS to get some style. I hate to see a useful standard abandoned by the big boys, but what can I do about it?
I bristle when I encounter pages with a few hundred words of content surrounded by literally megabytes of framework and flotsam, but that's the gig, right, wading through the crap to find the ponies.
It's a shame the browser developers are making an open, interoperable, semantic web more difficult. It's not surprising, though. Browsers started going downhill after they removed the status bar and the throbber and made scrollbars useless.
Destroying the open web instead of advocating to fix one of the better underutilized browser technologies for a more Profitable Google.
I will not forget the name Mason Freed, destroyer of open collaborative technology.
Didn't this effort start with Mozilla and not Google? I think you will in fact forget the name Mason Freed, just like most of us forgot about XSLT.
> Didn't this effort start with Mozilla and not Google?
Maybe round one of it like ten years ago did? From what I understand, it's a Google employee who opened the "Hey, I want to get rid of this and have no plans to provide a zero-effort-for-users replacement." Github Issue a few months back.
> It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.
— https://news.ycombinator.com/item?id=44953349
I don't see any evidence of that claim from the materials I have available to me. [0] is the Github Issue I mentioned. [1] is the WHATNOT meeting notes linked to from that GH Issue... though I have no idea who smaug is.
[0] <https://github.com/whatwg/html/issues/11523>
[1] <https://github.com/whatwg/html/issues/11146#issuecomment-275...>
Smaug is the Mozilla engineer they were talking about:
https://github.com/smaug----
Ah, that was my problem... I didn't put enough minuses after the nickname. ;)
You can remember their name forever now!
You appear to have forgotten how to scroll up and notice that the name of the OP is not my name. ;)
No, I know, I'm just being silly. Sorry, I didn't mean to attribute that sentiment to you.
Hilarious, thanks. Sorry you have issues with memory.
It started with Mozilla, Apple, and Opera jumping ship and forming WHATWG. That stopped new XML related technologies from being adopted in browsers twenty years ago. Google is just closing the casket and burying the body.
Why would I forget about XSLT, a really good technology pushed to the wayside by bad-faith actors? Why would I forget Mason Freed, a person dedicating himself to ruining perfectly good technology that needs a little love?
Do you have some sort of exclusively short-term memory where you can't remember someone's name? Bizarre reply. Other people may have had a similarly lazy idea, but Mason is the one pushing and leading the charge.
It seems maybe you want me to blame this on Google as a whole but that would mean bypassing blame and giving into their ridiculous bs.
https://www.youtube.com/watch?v=1NqLGp6qRuU
Ah you’re just a Google boot licker. Sorry I wasted my time responding, I thought you were serious.
I'm not a Google boot licker, I just hate XSLT, like (empirically) most developers.
Well, maybe you should at least start licking some boots, because your attitude towards a templating language is the billion-dollar corpo way. You'd fit right in, and could at least get some perks out of shitting on technology you probably never fully understood, and you'd get to be a better contributor to helping close off the web.
All of the disliked technologies and people who use them will be gone one day as we champion these destructive initiatives onward. Thank you hater!
> Destroying the open web instead of advocating to fix one of the better underutilized browser technologies for a more Profitable Google.
Google, Mozilla and Apple do not care if it doesn't make them money, unless you want to pay them billions to keep that feature?
> I will not forget the name Mason Freed, destroyer of open collaborative technology.
This is quite petty.
So is blatantly ignoring pushback against removing a feature like this. Eye for an eye.
Blame Apple and Mozilla, too, then. They all agreed to remove it.
They all agreed because XSLT is extremely unpopular and worse than JS in every way. Performance/bloat? Worse. Security? MUCH worse. Language design? Unimaginably worse.
EDIT: I wrote thousands of lines of XSLT circa 2005. I'm grateful that I'll never do that again.
This is only repeated by people who have never used it.
XSLT is still a great way of easily transforming xml-like documents. It's orders of magnitude more concise than transforming using Javascript or other general programming languages. And people are actively re-inventing XSLT for JSON (see `jq`).
I used to use XSLT a lot, though it was a while ago.
You can use Javascript to get the same effect and, indeed, write your transforms in much the same style as XSLT. Javascript has xpath (still). You have a choice of template language but JSX is common and convenient. A function for applying XSLT-style matching rules for an XSLT push style of transform is only a few lines of code.
Do you have a particular example where you think Javascript might be more verbose than XSLT?
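To make the "few lines of code" claim concrete, here is one possible sketch of XSLT's push model in plain JavaScript. The node shape and rule API are invented for illustration; a real version would walk DOM nodes and use XPath tests rather than name checks.

```javascript
// Rules play the role of xsl:template match="..."; they are tried in order.
const rules = [
  { match: n => n.name === "feed",
    render: (n, apply) => `<ul>${apply(n.children)}</ul>` },
  { match: n => n.name === "entry",
    render: (n, apply) => `<li>${apply(n.children)}</li>` },
  { match: n => n.name === "title",
    render: (n, apply) => n.text },
];

// apply() plays the role of xsl:apply-templates. The fallback branch
// mirrors XSLT's built-in default rule: recurse into children.
function apply(nodes) {
  return nodes.map(n => {
    const rule = rules.find(r => r.match(n));
    return rule ? rule.render(n, apply) : apply(n.children || []);
  }).join("");
}

// A toy document standing in for a parsed XML tree.
const doc = { name: "feed", children: [
  { name: "entry", children: [{ name: "title", children: [], text: "Hello" }] },
] };

console.log(apply([doc])); // <ul><li>Hello</li></ul>
```

The push style (rules fire as the tree is walked) is what distinguishes this from an ordinary template literal: adding support for a new element means adding a rule, not editing a central render function.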
Who is transforming XML documents on the web? Most people produce HTML to begin with, so XSLT is a solution seeking a problem. If you really insist, you could just use XSLT via server side rendering.
I actually do have to work with raw XML and XSLTs every once in a while for a java-based CMS and holy hell, it's nasty.
Java in general... Maven, trying to implement extremely simple things in Gradle (e.g. only execute a specific Thing as part of the pipeline when certain conditions are met) is an utter headache to do in the pom.xml because XML is not a programming language!
It is an unfortunate fact about our industry that all build tools suck. Tell me what your favorite build tool is and I can point at hundreds of HN threads ripping it to shreds. Maybe it's NPM? Cue the screams...
I agree though, "XML is not a programming language" and attempts to use it that way have produced poor results. You should have seen the `ant` era! But this is broader than XML - look at pretty much every popular CI system for "YAML is not a programming language".
That doesn't mean that XML isn't useful. Just not as a programming language.
"Java is a big DSL to transform XML into stacktraces"
https://news.ycombinator.com/item?id=26663191
XSLT (or ANT) may be Turing complete, but it's firmly embedded in the Turing Tarpit.
https://en.wikipedia.org/wiki/Turing_tarpit
>"54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy." -Alan Perlis
But, that's what XSL is! XSL is a Turing-complete programming language in XML for processing XML documents. Being in XML is a big part of what makes XSL so awful to write.
XSL may be Turing-complete but it's not a programming language and wasn't intended to be one. It's a declarative way to transform XML. When used as such I never found it awful to write... it's certainly much easier than doing the equivalent in general purpose programming languages.
Maybe by analogy: There are type systems that are Turing complete. People sometimes abuse them to humorous effect to write whole programs (famously, C++ templates). That doesn't mean that type systems are bad.
XSL is a functional programming language, not a declarative language. When you xsl:apply-template, you're calling a function.
Functional programming languages can often feel declarative. When XSL is doing trivial, functional transformations, when you keep your hands off of xsl:for-each, XSL feels declarative, and doesn't feel that bad.
The problem is: no clean API is perfectly shaped for UI, so you always wind up having to do arbitrary, non-trivial transformations with tricky uses of for-each to make the output HTML satisfy user requirements.
XSL's "escape hatch" is to allow arbitrary Turing-complete transformations. This was always intended to exist, to make easy transformations easy and hard transformations possible.
You basically never need to write Turing-complete code in a type system, but in any meaningful XSL project you will absolutely need to write Turing-complete XSL.
XSL's escape hatch is always needed, but it's absolutely terrible, especially compared to JS, especially compared to modern frameworks. This is why JS remained popular, but XSL dwindled.
> It is an unfortunate fact about our industry that all build tools suck. Tell me what your favorite build tool is and I can point at hundreds of HN threads ripping it to shreds. Maybe it's NPM? Cue the screams...
npm isn't even a build tool, it's a package manager and at that it's actually gotten quite decent - the fact that the JS ecosystem at large doesn't give a fuck about respecting semantic versioning or keeps reinventing the wheel or that NodeJS / JavaScript itself lacks a decent standard library aren't faults of npm ;)
Maven and Gradle in contrast are one-stop-shops, both build orchestrators and dependency managers. As for ant, oh hell yes I'm aware of that. The most horrid build system I encountered in my decade worth of tenure as "the guy who can figure out pretty much any nuclear submarine project (aka, only surfaces every few years after everyone working on it departed)" involved Gradle, which then orchestrated Maven and Ant, oh and the project was built on a Jenkins that was half DSL, half clicked together in the web UI, and the runner that executed the builds was a manually set up, "organically grown" server. That one was a holy damn mess to understand, unwind, clean up and migrate to Gitlab.
> look at pretty much every popular CI system for "YAML is not a programming language".
Oh yes... I only had the misfortune of having to code for Github Actions once in my life time, it's utter fucking madness compared to GitLab.
> Security? MUCH worse.
Comparing a single-purpose declarative language that is not even really Turing-complete with all the ugly hacks needed to make DOM/JS reasonably secure does not make any sense.
What exactly can you abuse in XSLT (without non-standard extensions) to do anything security-relevant? (DoS by infinite recursion or memory exhaustion doesn't count; you can do the same in JS...)
If you would RTFA, they're removing XSLT specifically for security reasons. They provide the following links:
https://www.offensivecon.org/speakers/2025/ivan-fratric.html
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
https://nvd.nist.gov/vuln/detail/CVE-2025-7425
https://nvd.nist.gov/vuln/detail/CVE-2022-22834
(And, for the record, XSL is Turing-complete. It has xsl:variable, xsl:if, xsl:for-each, and xsl:apply-template function calls.)
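A minimal stand-alone fragment (hypothetical, not tied to any stylesheet discussed here) showing the usual XSLT 1.0 idiom behind that claim: variables are immutable, so iteration is written as recursion through a named template.

```xml
<!-- Counts down from $n: called with n=3, emits "3 2 1 ". -->
<xsl:template name="countdown">
  <xsl:param name="n"/>
  <xsl:if test="$n &gt; 0">
    <xsl:value-of select="$n"/>
    <xsl:text> </xsl:text>
    <!-- Tail-recursive self-call stands in for a loop -->
    <xsl:call-template name="countdown">
      <xsl:with-param name="n" select="$n - 1"/>
    </xsl:call-template>
  </xsl:if>
</xsl:template>
```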
Are the security concerns not about libxslt, rather than XSLT?
They are about libxslt, but Mason Freed doesn't want you to know that. They could contribute to a Rust project which has already implemented XSLT 1.0, thus matching the browsers. But that would be good software engineering, and logical.
How is it worse than JS? It's a different thing...
> They all agreed to remove it.
All those people suck, too.
Were you counting on a different response?
> XSLT is extremely unpopular and worse than JS in every way
This isn't a quorum of folks torpedoing a proposed standard. This is a decades-old stable spec and an established part of the Web platform, and welching on their end of the deal will break things, contra "Don't break the Web".
You can continue to use XSLT server-side to emit HTML if you are deeply, deeply concerned about the technology.
Do you still beat your wife? <https://en.wikipedia.org/wiki/Do_you_still_beat_your_wife?>
I don't think that applies here (especially since I didn't even ask a question).
"I'm sad it's going away in the client!"
"So move it to the server, and the end-user will get essentially the same experience."
Am I missing something here?
> especially since I didn't even ask a question
Oh, that's the operative part? Accept my apologies. What I meant to say is, "I can see that you're deeply, deeply concerned about being able to continue beating your wife. I think you should reconsider your position on this matter."
No question mark, see? So I should be good now.
> Am I missing something here?
Probably not. People who engage in the sort of underhandedness linked to above generally don't do it without knowing that they're doing it. They're not missing anything. It's deliberate.
So, too, I would guess, is the case with you—particularly since your current reply is now employing another familiar underhanded rhetorical move. Familiar because I already called it out within the same comment section:
> The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
<https://news.ycombinator.com/item?id=45824392>
I seem to have personally offended you, and for that I am sorry.
This seems personal to you so I'll bow out of further discourse on the subject as it is not particularly personal to me. The websites I maintain use a framework to build RSS output, and the framework will be modified to do server-side translation or polyfill as needed to provide a proper HTML display experience for end-users who want that.
They did not agree to remove it. From the public posts I can see, that's a spun lie. They agreed to explore removing it but preferred to keep it, for good reasons.
Only Google is pushing forward and twisting that message.
> They did not agree to remove it. This is a spun lie from the public posts I can see. They agreed to explore removing it but preferred to keep it for good reasons.
Mozilla:
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support.
— https://github.com/mozilla/standards-positions/issues/1287#i...
WebKit:
> WebKit is cautiously supportive. We'd probably wait for one implementation to fully remove support, though if there's a known list of origins that participate in a reverse origin trial we could perhaps participate sooner.
— https://github.com/whatwg/html/issues/11523#issuecomment-314...
Describing either of those as “they preferred to keep it” is blatantly untrue.
So you’re choosing to help them spin the lie by cherry picking comments.
The Mozilla comment itself ends with:
> If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
> If it turns out not to be possible to remove the feature, we’d like to replace our current implementation. The main requirements would be compatibility with existing web content, addressing memory safety security issues, and not regressing performance on non-XSLT content. We’ve seen some interest in sandboxing libxslt, and if something with that shape satisfied our normal production requirements we would ship it.
But the only way it’s possible to remove the feature is if you ignore everyone asking you to please not to remove it.
Therefore, by totally ignoring pushback, you can twist the Mozilla reps' words to mean the only option is to remove it.
Similarly with the Webkit comment:
> WebKit is cautiously supportive.
Both these orgs requested investigation, not removal. Both expressed some concern and caution. Google did not; they only ever pushed forward with removing it, even going so far as to ignore the follow-up request to implement XSLT 3.0.
No it’s not blatantly untrue. It’s unblatantly misleading.
Furthermore I’d say for those specific comments, “go ahead and remove it”, the inverse is blatantly untrue.
If somebody says “our position is A but if that’s not possible we should do B”, it means they prefer A. It doesn’t mean they prefer B, and telling people that they prefer B when you know otherwise is dishonest.
The comment isn’t “our position is A” the comment is “our position is A if B,C,D aren’t possible for compatibility reasons”. Aka “if we must remove it then fine, else we would like to improve it”.
Google then side stepped all the compatibility concerns and notions to improve it and arguments against removing it so they could only address A.
Go ahead and argue for all the word twisting these bad actors have done. You won't change my mind: this is an obvious attack on the open web by people subservient to the ad tech industry. Prove to me it's not, when all the browsers depend on a platform for ad money.
They have installed people like Mason Freed into these positions, who are incapable of reason, to carry this objective forward.
This is the problem with any C/C++ codebase; using Rust instead would have been a better solution than just removing web standards from what is supposed to be a web browser.
I wrote a bunch of stuff with XSLT back in the day that I thought was pretty cool but I can't for the life of me remember what it was...
> When that solution isn't wanted, the polyfill offers another path.
A solution is only a solution if it solves the problem.
This sort of thing, basically a "reverse X/Y problem", is an intellectually dishonest maneuver, where a thing is dubbed a "solution" after just, like, redefining the problem to not include the parts that make it a problem.
The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
> As mentioned previously, the RSS/Atom XML feed can be augmented with one line, <script src="xslt-polyfill.min.js" xmlns="http://www.w3.org/1999/xhtml"></script>, which will maintain the existing behavior of XSLT-based transformation to HTML.
Oh, yeah? It's that easy? So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right? Or is this another instance where well-paid engineers on the Chrome team who elected to accept the responsibility of maintaining the stability of the Web have decided that they like the getting-paid part but don't like the maintaining-the-stability-of-the-Web part and are talking out of both sides of their mouths?
> So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right?
As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
And that's the issue with XSLT: it won't.
> As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
This is a (poor) attempt at gaslighting/retconning.
The phrase "Don't break the Web" is not original to this thread.
(I can't say I look forward to your follow-up reply employing sleights of hand like claims about how stuff like Flash that was never standardized, or the withdrawal of experimental APIs that weren't both stable/finalized and implemented by all the major browsers, or the long tail of stuff on developer.mozilla.org that is marked "deprecated" (but nonetheless still manages to work) are evidence of your claim and that browser makers really do have a history of doing this sort of thing. This is in fact the first time something like this has actually happened—all because there are engineers working on browsers at Google (and Mozilla and Apple) that are either confused about how the Web differs from, say, Android and iOS, or resentful of their colleagues who get to work on vendor SDKs where the API surface area is routinely rev'd to remove whatever they've decided no longer aligns with their vision for their platform. That's not what the Web is, and those engineers can and should go work on Android and iOS instead of sabotaging the far more important project of attending to the only successful attempt at a vendor-neutral, ubiquitous, highly accessible, substrate for information access that no one owns and that doesn't fuck over the people who rely on it being stable.)
Mozilla's own site expounds on "Don't Break the Web" as "Web browser vendors should be able to implement new web technologies without causing a difference in rendering or functionality that would cause their users to think a website is broken and try another browser as a result."
There is no meaningful risk of that here. The percentage of web users who are trying to understand content via XSLT'd RSS is nearly zero, and for everyone who is, there is either polyfill or server-side rendering to correct the issue.
> and those engineers can and should go work on Android and iOS
With respect: taken to its logical conclusion, that would be how the web as a client-renderable system dies and is replaced by Android and iOS apps as the primary portal to interacting with HTTP servers.
If the machine is so complex that nobody wants to maintain it, it will go unmaintained and be left behind.
> Mozilla's own site expounds on "Don't Break the Web" as
I'm a former Mozillian. I don't give a shit how it has been retconned by whomever happened to be writing copy that day, if indeed they have—it isn't even worth bothering to check.
"Don't break the Web" means exactly what it sounds like it means. Anything else is a lie.
> If the machine is so complex that nobody wants to maintain it, it will go unmaintained
There isn't a shortage of people willing to work on Web browsers. What there is is a finite amount of resources (in the form of compensation to engineers), and a set of people holding engineering positions at browser companies who, by those engineers' own admission, do not satisfy the conditions of being both personally qualified and equipped to do the necessary work here. But instead of stepping down from their role and freeing up resources to be better allocated to those who are equipped and willing, they're keeping their feet planted for the simple selfish reason that doing otherwise might entail a salary reduction and/or diminished power and influence. So they sacrifice the Web.
Fuck them.
Sounds like you're volunteering to maintain the XSLT support in Firefox. That's great! That means when Chrome tries to decommission it, users who rely upon it will flock to Firefox instead because it's the only browser that works right with the websites they use.
Firefox ascendancy is overdue. Best of luck!
Do you still beat your wife?
Don't understand the question, but I do wish you well in your future endeavors!
It's a question about whether you still beat your wife. If the answer is yes, then your answer is yes, and if the answer is no, then your answer is no.
It sounds like you're looking for a jurisdiction where you can beat your wife.
I really don't follow how wife-beating has anything to do with browser protocols or standards adoption / modification, sorry.
At the very least it's (probably*) illegal whether you agree with it or not. Even where it's not illegal, physical violence against your partner isn't okay, even if they've angered you.
* depending on jurisdiction
This is all entirely off-topic from using XSLT in a website. As opposed to rendering XSLT client-side or server side, or rendering it using an in-engine native-implementation API or a JavaScript engine.
Happy to talk about those if you want; not interested at all in this violence digression you've decided to go on.
"The reality is that for all of the work that we've put into HTML, and CSS, and the DOM, it has fundamentally utterly failed to deliver on its promise.
It's even worse than that, actually, because all of the things we've built aren't just not doing what we want, they're holding developers back. People build their applications on frameworks that _abstract out_ all the APIs we build for browsers, and _even with those frameworks_ developers are hamstrung by weird limitations of the web."
- https://news.ycombinator.com/item?id=34612696#34622514
I find it so weird that browser devs can point to the existence of stuff like React and not feel embarrassed.
> I find it so weird that browser devs can point to the existence of stuff like React and not feel embarrassed.
Sorry, I don't follow. What's embarrassing about React?
I think in the context of that link, they see React as a failing of the web. If the W3C/WHATWG/browser vendors had done a reasonable job of moving web technology forward, things like React would be far less necessary. But they spent all their time and energy working on things like <aside> and web components instead of listening to web developers and building things that are actually useful in a web developer’s day-to-day life. Front-end frameworks like React, on the other hand, did a far better job of listening to developers and building what they needed.
One of the biggest flaws of the current web technology stack is the unholy mix of concerns with respect to layout. Layout is handled by CSS and HTML in a way that drives people crazy. I recently wanted to have a scroll bar for super long text inside a table cell. Easy, right? Turns out you need to change the HTML to include a <div> inside the table cell, since there is no way to style the table cell itself and have it do what you expect it to do.
XSLT doesn't solve this at all, since you're just generating more of the problem, i.e. more HTML and CSS. It feels like there should have been some sort of language exclusively for layout definition that doesn't necessarily know about the existence of HTML and CSS beyond selector syntax.
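To make the table-cell example above concrete, here's a minimal sketch of the workaround (the class name is invented): the scroll region has to be an inner div, because overflow on the td itself doesn't behave the way you'd expect.

```html
<td>
  <!-- Overflow must live on a wrapper element, not the <td>. -->
  <div class="cell-scroll" style="max-height: 6em; overflow-y: auto;">
    ...super long text...
  </div>
</td>
```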
Would it be possible to move it to an add-on for those who still want it? Do WebExtensions support third-party libs?
TIL: Chrome supports XSLT.
Good riddance I guess - it and most of the tech from the "XML era" was needlessly overcomplicated.
XSLT is really powerful and it is declarative, like CSS, but can both push and pull.
It's a loss, if you ask me, to remove it from client-side, but it's one I worked through years ago.
It's still really useful on the server side for document transformation.
I imagine a WASM XSLT interpreter wouldn't be too hard to compile?
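To illustrate the push/pull distinction mentioned above, a rough sketch (the /library/book/title document shape is hypothetical):

```xml
<!-- "Pull" style: one template reaches into the document
     and pulls values out by explicit path. -->
<xsl:template match="/">
  <h1><xsl:value-of select="/library/book/title"/></h1>
</xsl:template>

<!-- "Push" style: the document is pushed through match rules;
     each node type finds its own template wherever it appears. -->
<xsl:template match="book">
  <xsl:apply-templates/>
</xsl:template>
<xsl:template match="title">
  <h1><xsl:value-of select="."/></h1>
</xsl:template>
```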
TFA mentions polyfills and libraries.
Your response is like seeing the cops going to the wrong house to kick in your neighbors door, breaking their ornaments in their entry way, and then saying to yourself, "Good. I hate yellow, and would never have any of that tacky shit in my house."
As your first sentence of your comment indicates, the fact that it's supported and there for people to use doesn't (and hasn't) result in you being forced to use it in your projects.
Yes, but software, and especially browser, complexity has ballooned enormously over the years. And while XSLT probably plays only a tiny part in that, this kind of complexity is what gets embedded in every Electron app that takes 500 MB to do what could be done in 1 MB, makes it incrementally harder to build and maintain a competing browser, etc., etc. It's not zero cost.
I do tend to support backwards compatibility over constant updates and breakage, and needless hoops to jump through as e.g. Apple often puts its developers through. But having grown up and worked in the overexuberant XML-for-everything, semantic-web 1000-page specification, OOP AbstractFactoryTemplateManagerFactory era, I'm glad to put some of that behind us.
If that makes me some kind of Gestapo, so be it.
Point to the part of your comment that has any-fucking-thing to do with the topic at hand (i.e. engages with the actual substance of the comment that it's posted as a reply to). Your comment starts with "Yes but", as if it to present it as a rebuttal or rejoinder to something that was said, but then proceeds into total non-sequitur. It's an unrestrained attempt at a change of subject and makes for a not-very-hard-to-spot type of misdirection.
Your neighbors' ugly yellow tchotchkes have in no way forced you—nor will they ever force you—to ornament your house with XSLT printouts.
Alright, you're extremely rude and combative so I'll probably tap out here.
But consider if the "yellow tchotchkes" draw some power from my house, produce some stinky blue smoke that occasionally wafts over, requires a government-paid maintenance person to occasionally stop by and work on, that I partly pay for with my taxes.
That's my point.
> Alright, you're extremely rude
In contrast to poisoning the discussion with subtle conversational antimatter while wearing a veneer of amiability. My comments in this thread are not insidiously off-topic non-replies presented as somehow relevant in apposition to the ones that precede them.
> consider if the "yellow tchotchkes" draw some power from my house, produce some stinky blue smoke that occasionally wafts over, requires a government-paid maintenance person to occasionally stop by and work on, that I partly pay for with my taxes
Anyone making the "overwork the analogy" move automatically loses, always. Even ignoring that, nothing in this sentence even makes any sense wrt XSLT in the browser or the role of browser makers as stewards of stable Web standards. It's devoid of any cogent point and communicates no insight on the topic at hand.
Remove crappy JS APIs and other web-tech first before deprecating XSLT - which is a true-blue public standard. For folks who don't enable JS and XML data, XSLT is a life-saver.
If we're talking about removing things for security's sake, the ticking time bomb that is WebUSB seems to me top of the list: dangerous, not actually a standard (it is Chrome-only), and yet a bunch of websites think it's a big, good reason to be Chrome-only.
But XSLT can be replicated with JavaScript and the reverse is, sadly, untrue.
So if only one needed to go, it seems obvious which it should be.
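For what it's worth, replicating it in JavaScript is only a few lines in browsers that expose the XSLTProcessor API (which the polyfill re-implements in script). This is browser-only code, and feed.xml / feed.xsl are placeholder URLs:

```javascript
// Browser-only sketch; won't run outside a browser.
const [xmlDoc, xslDoc] = await Promise.all(
  ["feed.xml", "feed.xsl"].map(async (url) => {
    const text = await (await fetch(url)).text();
    return new DOMParser().parseFromString(text, "application/xml");
  })
);

const proc = new XSLTProcessor();
proc.importStylesheet(xslDoc);
document.body.appendChild(proc.transformToFragment(xmlDoc, document));
```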
Perhaps, but isn't the contemporary tech stack orders of magnitude more complicated? Doesn't feel like a strong motivating argument.
Unquestionably the right move. From the various posts on HN about this, it's clear that (A) not many people use it, (B) it increases security vulnerability surface area, and (C) the few people who do claim to use it have nothing to back up the claim.
The major downside to removing this seems to be that a lot of people LIKE it. But eh, you're welcome to fork Chromium or Firefox.
Chrome and other browsers could virtually completely mitigate the security issues by shipping, in the browser, the polyfill they're suggesting all XSLT-dependent sites deploy. By doing so, their XSLT implementation would become no less secure than their JavaScript implementation (and fat chance they'll remove that). The fact that they've rejected doing so is a pretty clear indication that security is just an excuse, IMO.
I wish more people would see this. They know exactly how to sandbox it, they’re telling you how to, they’re even providing and recommending a browser extension to securely restore the functionality they’re removing!
The security argument can be valid motivation for doing something, but is utterly illegitimate as a reason for removing. They want to remove it because they abandoned it many years ago, and it’s a maintenance burden. Not a security burden, they’ve shown exactly how to fix that as part of preparing to remove it!
And it's a very small maintenance burden at that. Shipping the polyfill would technically still be a dependency, but about as decoupled a dependency as you can get. Its only interaction with the rest of the code would be through public APIs that browsers have to keep stable anyway.
by definition XSLT is more secure than JavaScript.
Yes and no. It's true that if you had to pick one to support and were only considering security, it would virtually certainly be better to go with XSLT. However, browser makers basically _have_ to support javascript, so as long as XSLT has a non-zero attack surface (which it does, at least as long as it's a native implementation), including it would be less secure. That said, as I pointed out, there are obvious ways to mitigate this issue and reduce the extra attack surface to effectively zero.
"[Y]ou're welcome to fork Chromium or Firefox" is the software developer equivalent of saying "you're welcome to go fuck yourself."
what exactly is the security concern with xslt?
It parses untrusted input, the library is basically unmaintained, it’s not often audited but anytime someone looks they find a CVE.
This is answered in the article.
XSLT the idea contains few (but not zero) unavoidable security flaws.
libxslt the library is a barely-maintained dumpster fire of bad practices.
They should audit LLMs.
More like google forgot how to write secure code and wings it, or just doesn't give a fuck.
I recently had an interesting chat with Liam Quin (who was on W3C's XML team) about XML and CDATA on Facebook, where he revealed some surprising history!
Liam Quin in his award-winning weirdest hat, also Microsoft's Matthew Fuchs' talk on achieving extensibility and reuse for XSLT 2.0 stylesheets, and Stephan Kepser's simple proof that XSLT and XQuery are Turing complete using μ-recursive functions, and presentations about other cool stuff like Relax NG:
https://www.cafeconleche.org/oldnews/news2004August5.html
Liam Quin's post:
https://www.facebook.com/liam.quin/posts/pfbid0X6jE58zjcEK5U...
#XML people!
How do we communicate the idea that declarative markup is a good idea? Declarative markup is where you identify what is there, not what it does. This is a title, not, make this big and bold. This is a part number, not, make this blink when you click it - sure, you can do that to part numbers, but don't encode your aircraft manual that way.
But this idea is hard to grasp, for the same reason that WYSIAYG word processors (the A stands for All, What you see is all you get) took over from descriptive formatting in a lot of cases.
For an internal memo, for an insurance letter to a client, how much matters? Well, the insurance company has to be able to search the letters for specific information for 10, 20, 40, 100 years. What word processor did you use 40 years ago? Wordstar? Magic Wand? Ventura?
#markupMonday #declarativeMarkup
Don Hopkins: I Wanna Be <![CDATA[
https://donhopkins.medium.com/i-wanna-be-cdata-3406e14d4f21
Liam Quin: hahaha i actually opposed the inclusion of CDATA sections when we were designing XML (by taking bits we wanted from SGML), but they were already in use by the people writing the XML spec! But now you’ve given me a reason to want to keep them. The weird syntax is because SGML supported more keywords, not only CDATA, but they were a security fail.
Don Hopkins: There was a REASON for the <![SYNTAX[ ]]> ?!?!? I thought it was just some kind of tribal artistic expressionism, like lexical performance art!
At TomTom we were using xulrunner for the cross platform content management tool TomTom Home, and XUL abused external entities for internationalizing user interface text. That was icky!
For all those years programming OpenLaszlo in XML with <![CDATA[ JavaScript code sections ]]>, my fingers learned how to type that really fast, yet I never once wondered what the fuck ADATA or BDATA might be, and why not even DDATA or ZDATA? What other kinds of data are there anyway? It sounds kind of like quantum mechanics, where you just have to shrug and not question what the words mean, because it's just all arbitrarily weird.
Liam Quin: haha it’s been 30 years, but, there’s CDATA (character data), replaceable character data (RCDATA) in which `é` entity definitions are recognised but not `<`, IGNORE and INCLUDE, and the bizarre TEMP which wraps part of a document that might need to be removed later. After `<!` you could also have comments, <!-- .... --> for example (all the delimiters in SGML could be changed).
Don Hopkins: What is James Clark up to these days? I loved his work on Relax/NG, and that Dr. Dobb's interview "The Triumph of Simplicity".
https://web.archive.org/web/20020224025029/http://www.ddj.co...
Note: James Clark is arguably the single most important engineer in XML history:
- Lead developer of SGMLtools, expat, and Jade/DSSSL
- Co-editor of the XML 1.0 specification
- Designer of XSLT 1.0 and XPath 1.0
- Creator of Relax NG, one of the most elegant schema languages ever devised
He also wrote the reference XSLT implementation XT, used in early browsers and toolchains before libxslt dominated.
James Clark’s epic 2001 Doctor Dobb's Journal "A Triumph of Simplicity: James Clark on Markup Languages and XML" interview captures his minimalist design philosophy and his critique of standards and committee-driven complexity (which later infected XSLT 2.0).
It touches on separation of concerns, simplicity as survival, a standard isn't one implementation, balance of pragmatism and purity, human-scale simplicity, uniform data modeling, pluralism over universality, type systems and safety, committe pathology, and W3C -vs- ISO culture.
He explains why XML is designed the way it is, and reframes the XSLT argument: his own philosophy shows that when a transformation language stops being simple, it loses the very quality that made XML succeed.
lol, talking about "secure browser" on chrome dot com
[dead]
Nice find — interesting to see browsers moving to drop XSLT support. I used XSLT once for a tiny site and it felt like magic—templating without JavaScript was freeing. But maybe it’s just niche now, and browser vendors see more cost than payoff.
Curious: have any of you used XSLT in production lately?
Yes. It's used heavily in the publishing and standards industries that store the documents in JATS and other XML-based formats.
Because browsers only support XSLT 1.0 the transform to HTML is typically done server side to take advantage of XSLT 2.0 and 3.0 features.
It's also used by the US government:
1. https://www.govinfo.gov/bulkdata/BILLS
2. https://www.govinfo.gov/bulkdata/FR/resources
I lead a team that manage trade settlements for hedge funds; data is exported from our systems as XML and then transformed via XSLT into whatever format the prime brokers require.
All the transforms are maintained by non-developers, mainly business analysts. Because the language is so simple we don't need to give them much training: just get IntelliJ installed on their machine, show them a few samples, and let them work away.
We couldn't have managed with anything else.
[dead]
Good, XSLT was crap. I wrote an RSS feed XSLT template. Worst dev experience ever. No one is/was using XSLT. Removing unused code is a win for browsers. Every anti bloat HNer should be cheering
The first few times you use it, XSLT is insane. But once something clicks, you figure out the kinds of things it’s good for.
I am not really a functional programming guy. But XSLT is a really cool application of functional programming for data munging, and I wouldn’t have believed it if I hadn’t used it enough for it to click.
Right. I didn't use it much on the client side so I am not feeling this particular loss so keenly.
But server side, many years ago I built an entire CMS with pretty arbitrary markup regions that a designer could declare (divs/TDs/spans with custom attributes basically) in XSLT (Sablotron!) with the Perl binding and a customised build of HTML Tidy, wrapped up in an Apache RewriteRule.
So designers could do their thing with dreamweaver or golive, pretty arbitrarily mark up an area that they wanted to be customisable, and my CMS would show edit markers in those locations that popped up a database-backed textarea in a popup.
What started off really simple ended up using Sablotron's URL schemes to allow a main HTML file to be a master template for sub-page templates, merge in some dynamic functionality etc.
And the thing would either work or it wouldn't (if the HTML couldn't be tidied, which was easy enough to catch).
The Perl around the outside changed very rarely; the XSLT stylesheet was fast and evolved quite a lot.
XSLT's matching rules allow a 'push' style of transform that's really neat. But you can actually do that with any programming language such as Javascript.
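A toy sketch of that push style in plain JavaScript (the node shape and rule table here are made up for illustration, not any real API): rules match by node name, and apply() recurses over children the way xsl:apply-templates walks the source tree.

```javascript
// Hypothetical mini "push-style" transformer: each rule plays the role
// of an xsl:template match=..., and apply() plays xsl:apply-templates.
const rules = {
  article: (n, apply) => `<main>${apply(n.children)}</main>`,
  title:   (n, apply) => `<h1>${n.text}</h1>`,
  para:    (n, apply) => `<p>${n.text}</p>`,
};

// Recursively dispatch each node to its matching rule;
// unmatched nodes produce nothing.
function apply(nodes) {
  return nodes.map(n => (rules[n.name] || (() => ""))(n, apply)).join("");
}

const doc = [{
  name: "article",
  children: [
    { name: "title", text: "Hello", children: [] },
    { name: "para",  text: "World", children: [] },
  ],
}];

console.log(apply(doc)); // → <main><h1>Hello</h1><p>World</p></main>
```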
> Every anti bloat HNer should be cheering
Actually a transformation system can reduce bloat, as people don't have to write their own crappy JavaScript versions of it.
Being XML the syntax is a bit convoluted, but behind that is a good functional (in sense of functional programming language, not functioning) system which can be used for templating etc.
The XML made it a bit hard to get started and anti-XML-spirit reduced motivation to get into it, but once you know it, it beats most bloaty JavaScript stuff in that realm by a lot.
> No one is/was using XSLT.
Ah, when ignorance leads to arrogance; it is massively utilised by many large enterprises and state administrations in some countries.
Eg if you're american the library of congress uses it to show all legislative text.
I'm always puzzled by statements like this. I'm not much of a programmer and I wrote a basic XSLT document to transform rss.xml into HTML in a couple of hours. I didn't find it very hard at all (anecdotes are not data, etc)
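For the curious, a stylesheet in that couple-of-hours spirit can be as small as this sketch (element names follow the RSS 2.0 format; everything else is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal RSS-2.0-to-HTML stylesheet sketch. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/rss/channel">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <xsl:for-each select="item">
          <p>
            <a href="{link}"><xsl:value-of select="title"/></a>
          </p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```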
XSLT is complete and utter garbage. Good riddance.
Removing JavaScript for a more secure browser.
Although it's sad to see an interesting feature go, they're not wrong about security. Having a small attack surface matters even more when the code was maintained by one guy in Nebraska and he doesn't maintain it any more.
No, XSLT isn't required for the open web. Everything you can do with XSLT, you can also do without XSLT. It's interesting technology, but not essential.
Yes, this breaks compatibility with all the 5 websites that use it.