by sthottingal on 2/13/20, 4:13 AM with 72 comments
by trynewideas on 2/13/20, 6:29 AM
Bringing Parsoid closer to core makes it easier for non-Wikimedia admins to use more of MW's modern core features. The performance gains are a nice bonus, and I suspect they're related to the general performance improvements in PHP 7.2.
by mhd on 2/13/20, 9:37 AM
I'd really like to read that. The decision to have this parser as a completely separate component is the main reason why a lot of local MediaWiki installations completely avoided having a visual editor -- which in turn probably cost a lot of hours and/or left documentation missing, because WikiCreole ain't exactly a thing of beauty or something that's used in other places (as opposed to Markdown, which is an ugly beast too, but at least the ugly beast you know).
You need a heavy JS frontend for a visual editor anyway, so why not do it client-side?
Having to deploy a separate component, probably in a runtime environment that's not otherwise used, is pretty much the worst choice possible. Yes, I'm aware, you readers here probably do all kinds of hip docker/keights setups where yet another node microservice ain't nothing special (and should've been rewritten in Rust, of course), but a wiki is something on a different level of ubiquity.
by cscottnet on 2/13/20, 6:26 PM
But to whet your appetite: we used https://github.com/cscott/js2php to generate a "crappy first draft" of the PHP code for our JS source. It wasn't going for correctness; instead it tried to match code style and handle the mechanical syntax changes, so that we could more easily review git diffs from the crappy first draft to the "working" version and concentrate attention on the important bits, not the boring syntax-change-y parts.
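To give a flavor of what that looked like, here's a made-up example (not actual Parsoid code, and not literal js2php output) of the kind of mechanical first draft the tool produces -- structure and names preserved so that the later hand-edited diff stays small:

    // Hypothetical example; not actual Parsoid code or literal js2php output.
    // JS source:
    //   function countLinks(tokens) {
    //     return tokens.filter(t => t.type === 'link').length;
    //   }
    //
    // Rough mechanical PHP draft, before hand cleanup and type annotations:
    function countLinks( $tokens ) {
        return count( array_filter( $tokens, function ( $t ) {
            return $t->type === 'link';
        } ) );
    }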
The original legacy Mediawiki parser used a big pile of regexps and had all sorts of corner cases caused by the particular order in which the regexps were applied, etc.
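Just as a toy illustration of how the ordering bites you (this is not the actual legacy parser code): wikitext uses ''...'' for italics and '''...''' for bold, and two naive replacement passes already produce different, differently-broken output depending on which runs first:

    // Toy illustration only; the real legacy parser is far more involved.
    $text = "'''''both'''''";

    // Order 1: bold rule, then italic rule.
    $a = preg_replace( "/''(.+?)''/", '<i>$1</i>',
         preg_replace( "/'''(.+?)'''/", '<b>$1</b>', $text ) );

    // Order 2: italic rule, then bold rule.
    $b = preg_replace( "/'''(.+?)'''/", '<b>$1</b>',
         preg_replace( "/''(.+?)''/", '<i>$1</i>', $text ) );

    var_dump( $a === $b );  // bool(false): the two orders disagree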
Parsoid uses a PEG tokenizer, written with pegjs (we wrote a PHP backend to pegjs for this project). There are still a bunch of regexps scattered throughout the code, because they are still very useful for text processing and a valuable feature of both JavaScript and PHP as programming languages, but they are not the primary parsing mechanism. Translating the regexps was actually one of the more difficult parts, because there are some subtle differences between JS and PHP regexps.
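One concrete example of the kind of subtle difference I mean (an illustrative snippet, not code from the port): PCRE's '$' will also match just before a trailing newline unless you add the D modifier, whereas JavaScript's '$' (without the m flag) only matches at the very end of the string:

    // Illustrative only: one JS-vs-PHP regexp gotcha.
    $subject = "foo\n";

    var_dump( preg_match( '/foo$/', $subject ) );   // int(1): PCRE '$' also matches before a final "\n"
    var_dump( preg_match( '/foo$/D', $subject ) );  // int(0): with D, '$' is strict, like JS
    // In JS, /foo$/.test("foo\n") === false.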
We made a deliberate choice to switch from JS-style loose typing to strict typing in the PHP port. Whatever you consider the long-term merits to be for maintainability, programming-in-the-large, etc., strict types were extremely useful for the porting project itself, since they caught a bunch of non-obvious problems where the types of things were slightly different in PHP and JS. JS used anonymous objects all over the place; we used PHP associative arrays for many of these places, but found it very worthwhile to take the time to create proper typed classes during the translation where possible; it really helped clarify the interfaces and, again, catch a lot of subtle impedance mismatches during the port.
We tried to narrow scope by not converting every loose interface or anonymous object to a type -- we actually converted as many things as possible to proper JS classes in the "pregame" before the port, but the important thing was to get the port done and complete as quickly as possible. We'll be continuing to tighten the type system -- as much for code documentation as anything else -- as we address code debt moving forward.
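As a made-up sketch of what that kind of conversion looks like (this is not an actual Parsoid class), a JS anonymous object like { name: 'b', dsr: [0, 5] } can be carried around as a PHP associative array, or promoted to a small typed class:

    // Hypothetical sketch; not an actual Parsoid class.
    class TagToken {
        /** @var string */
        public $name;
        /** @var int[] [start, end] source offsets */
        public $dsr;

        public function __construct( string $name, array $dsr ) {
            $this->name = $name;
            $this->dsr = $dsr;
        }
    }

    $loose = [ 'name' => 'b', 'dsr' => [ 0, 5 ] ];  // associative-array version
    $typed = new TagToken( 'b', [ 0, 5 ] );         // typed-class version

The typed version is more verbose, but it documents the interface and gives the engine a chance to flag mismatches at call sites.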
AMA, although I don't check hacker news frequently so I can't promise to reply.
by lolphp111 on 2/13/20, 7:07 PM
Now they've rewritten it in PHP, which is probably one of the worst languages out there -- and why not rewrite in something compiled if speed was the main reason for a rewrite?
For me, PHP sits in the middle: a poor language, and still slow compared to any compiled language. Also, I would want to see the WASM vs. PHP benchmarks they did before starting with PHP.
Lots of poor decisions from the wiki team.
by tjpnz on 2/13/20, 9:59 AM
I found this a little frightening given Parsoid/JS is handling user input.
by znpy on 2/13/20, 9:27 AM
by TicklishTiger on 2/13/20, 8:25 AM
For some reason, I did not manage to find it. Neither linked from this article, nor via the MediaWiki page:
https://www.mediawiki.org/wiki/Parsoid
Nor via the Phabricator page:
https://phabricator.wikimedia.org/project/profile/487/
What am I missing?
by harryf on 2/13/20, 8:39 AM
A nice (unexpected) side effect is that it became much easier for people to extend the parser with their own syntax, leading to an explosion of plugins ( https://www.dokuwiki.org/plugins?plugintype=1#extension__tab... )
I'm no expert on parsing theory, but I have the impression that applying standard approaches to parsing source code -- building syntax trees, attempting to express it with a context-free grammar, etc. -- is the wrong approach for parsing wiki markup, because it's context-sensitive. There's some discussion of the problem here https://www.mediawiki.org/wiki/Markup_spec#Feasibility_study
Another challenge for wiki markup, from a usability perspective: if a user gets part of the syntax of a page "wrong", you need to show them the end result so they can fix the problem, rather than have the entire page "fail" with a syntax error.
From looking at many wiki parsers before re-writing the Dokuwiki parser, what _tends_ to happen when people try to apply context-free grammars or build syntax trees is that they reach 80% and then stumble at the remaining 20% of edge cases of how wiki markup is actually used in the wild.
Instead of building an object graph, the Dokuwiki parser produces a simple flat array representing the source page ( https://www.dokuwiki.org/devel:parser#token_conversion ), which I'd argue makes it simpler to write code for rendering output (hence the many plugins), as well as more robust at handling "bad" wiki markup it might encounter in the wild -- less chance of some kind of infinite recursion or similar.
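Roughly, the idea is a flat instruction list like the following (a simplified sketch with made-up values, not the exact Dokuwiki format -- the linked dev docs describe the real one), where each entry is a handler call, its arguments, and a byte position, and a renderer just walks the list:

    // Simplified sketch; see the linked Dokuwiki dev docs for the real format.
    $instructions = [
        [ 'header',       [ 'My Page', 1 ], 0 ],
        [ 'p_open',       [],               10 ],
        [ 'cdata',        [ 'Some ' ],      10 ],
        [ 'strong_open',  [],               16 ],
        [ 'cdata',        [ 'bold' ],       18 ],
        [ 'strong_close', [],               22 ],
        [ 'p_close',      [],               24 ],
    ];

    foreach ( $instructions as [ $call, $args, $pos ] ) {
        // A renderer dispatches on $call; malformed markup just degrades to
        // plain cdata entries instead of failing the whole page.
        echo $call, "\n";
    }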
Ultimately it's a similar discussion to the SAX vs. DOM debates people used to have around XML parsing ( https://stackoverflow.com/questions/6828703/what-is-the-diff... ). From a glance at the Parsoid source they seem to be taking a DOM-like approach -- I wish them luck with that -- my experience suggests this will probably lead to a great deal more complexity, especially when it comes to edge cases.
by TicklishTiger on 2/13/20, 9:21 AM
Example: https://github.com/wikimedia/parsoid/blob/master/src/Parsoid...
This is how I would write the function definition:
    function html2wikitext($config, $html, $options = [], $data = null)

This is how Wikimedia did it:

    public function html2wikitext(
        PageConfig $pageConfig, string $html, array $options = [],
        ?SelserData $selserData = null
    ): string
I see this "strictness over readability" on the rise in many places and I think it is a net negative.Not totally sure, but this seems to be the old JS function definition:
https://github.com/abbradar/parsoid/blob/master/lib/parse.js...
A bit cryptic, and it suffers from the typical promise / async / callback / nesting horror so common in the JavaScript world:
    _html2wt = Promise.async(function *(obj, env, html, pb)
by echelon on 2/13/20, 6:21 AM
They claim,
> The two wikitext engines were different in terms of implementation language, fundamental architecture, and modeling of wikitext semantics (how they represented the "meaning" of wikitext). These differences impacted development of new features as well as the conversation around the evolution of wikitext and templating in our projects. While the differences in implementation language and architecture were the most obvious and talked-about issues, this last concern -- platform evolution -- is no less important, and has motivated the careful and deliberate way we have approached integration of the two engines.
Which is I suppose a compelling reason for a rewrite if you're understaffed.
I'd still be interested in writing it in Rust and then writing PHP bindings. There's even a possibility of running a WASM engine in the browser and skipping the roundtrip for evaluation.