YAPC::NA day two

If you haven't already, read about day one.

Postmodern Module Packaging

I began day two with "Postmodern Module Packaging" by Ingy döt Net. This was a great comparison of the current module packaging toolchains, and a good introduction to a philosophy Ingy calls "Acmeism." It emphasizes using the right tool for the job, and cross-pollination and sharing between toolsets. The CPAN is known as a (maybe the) major strength of Perl, so Ingy has tried to transplant the ideas that have made the CPAN such a success into other communities, as well as languages where there's no real community. He wrote a module of re-usable Bash script, including a Makefile, test suite, and embedded documentation (check it out - bash has no embedded documentation format of its own), using concepts from the CPAN to distribute his work. It was fascinating to watch Ingy's particular brand of ingenuity at work, and I gained a good deal of appreciation for other module packaging toolchains. We in Perl are pretty lucky!

Perl 5.16 and beyond

My favourite talk of the entire conference was "Perl 5.16 and beyond." Jesse Vincent is the current Perl5 Pumpking, a sort of benevolent dictator of the moment (Larry is Perl's dictator for life). Perl is currently undergoing a revival after a period of stagnation. This has caused some social problems, and there seems to be a good deal of confusion about whether we should be backwards compatible, or just backwards. (Hint: if you're seriously asking, you're already wrong.) Part of the problem is a lack of a shared vision (or any vision, really) for Perl5's future beyond appropriating more good ideas from Perl6. Jesse outlined the current state of Perl5, and his vision for the next version (5.16) and beyond. His plan would see a large-scale clean-up of the core of the language, including completing work to make the internals UTF-8-clean, and modularizing the interpreter's core. These would set the stage for removing some parts of perl from the "core core" - that is, removing them from the core perl interpreter, but shipping them alongside it by default, to be loaded on the fly as needed. This might apply to socket functions, networking functions, or functions for working with binary data - these should be shipped as part of the default kit, but don't need to be in the core interpreter. Doing this would reduce startup costs while cleaning up the codebase, making way for further improvements down the road. Another significant change would be the semantics of "use" - currently "use v5.14" means "give me 5.14 or newer." Jesse would like to see something like "use v5.16" mean "give me a perl that acts like 5.16." That's a big change, and it's not immediately clear what the stumbling blocks will be. He'd also like to see longer deprecation cycles - which means that features have to be right the first time. Smart matching isn't so smart, as p5p is currently discussing. Well, the 27-way dispatch table might have been a good clue at the time the smart matching feature was added. Jesse claims he'll be asking for clearer use cases and such when requesting proposals for new features. In the far future, Perl might have warnings on by default, have autodie built in and turned on by default, everything UTF-8 by default... it'll be a great Perl once we get there! I was pleasantly surprised that no tomatoes, real or imaginary, were thrown. The audience was very receptive to these suggestions, so I hope the same will be true when Jesse gives his talk again at OSCON.

One suggestion from the audience was that the regular expression engine should be modularized - in fact, it is already pluggable. I was under the impression that nobody had bothered to implement an alternative engine, but after looking, I was (of course - this is the Perl community we're talking about) wrong. I had been reading about the Thompson Nondeterministic Finite Automaton implementation described in papers beginning here. Instead of backtracking, an NFA regex engine explores each branch of the tree in parallel. You probably haven't come across this problem, but Perl's normal regex engine can have astronomically terrible performance on "pathological" input - an NFA regex engine has no pathological inputs. That makes it a very attractive thing - and lo and behold, there are RE2 and its Perl bindings. The RE2 engine can't support backreferences, or some of the fancy but decidedly non-regular parts of Perl regular expressions, but for the basic stuff, the syntax is the same. I'm excited by this, and while it isn't perfect, it's a great start. I've filed some bugs against both the RE2 library and the Perl bindings, and I hope to use this in the future.

Marketing & advocacy

After lunch, I attended two sessions on marketing or advocating for Perl. Mark Prather gave a 20-minute talk on his efforts marketing Perl. The talk was certainly interesting, but the next talk by chromatic (does anyone outside the IRS know his real name?) was meatier, and there was more time for discussion. One issue that came up is that of vendor support for Perl. Many operating system vendors ship a quite out-of-date perl, and sometimes ship broken installations of Perl (Red Hat and Apple are probably the worst). Some current versions of enterprise Linux ship with a version of Perl that hasn't been supported for 5 years. While they're technically still supported by the OS vendor, the reality is that they don't get regular updates, and this gives Perl a falsely bad image. The Perl community can't commit to supporting these old versions, so instead we discussed providing vendor-compatible packages which could be installed either over top of (slightly dangerous), or alongside, the system perl. While Perlbrew is great for developers, it isn't ideal for deployment. Currently, we're waiting for someone to step up to work with vendors to keep their packages more up-to-date, or to publish a set of Perl-community-approved packages for a variety of systems. I'm told that if you know what you're doing, making a package isn't so hard. Luckily, I don't know what I'm doing!


Next was a session on Perl one-liners, targeted at an audience less familiar with the practice than I was. One-liners are a great replacement for, or addition to, your other Unix one-liner tools like grep, sed, awk, sort, uniq, and so on. For example, a single line of Perl can help you find comments in C/C++ code:

perl -e '$in_comment = 0; while (<>) { $in_comment = 1 if m{\Q/*\E|//}; print if $in_comment; $in_comment = 0 if m{\Q*/\E|//}; }' *.cpp

Options for options

Next, Nick Patch presented some modules for processing command-line arguments. The standard is Getopt::Long, which ships with Perl by default, but it has a large memory footprint, doesn't offer advanced features like usage messages, and its interface can be cumbersome. Getopt::Lucid appears to be the frontrunner for replacing it directly, while App::Cmd makes it easier to write command-line programs with subcommands.

Lightning talks

The lightning talks on day two were excellent. Tatsuhiko Miyagawa presented Carton, a way to track module dependencies by version, which is currently not well supported by the CPAN toolchain; Carton provides an easy utility to track and install the correct versions of your dependencies. Another great presentation was "CPAN gems from the far East." Among the projects mentioned:

  • Furl, a "lightning-fast URL fetcher"
  • Text::Xslate, a very fast templating engine for Perl (100+ times faster than Template::Toolkit, the current incumbent in this domain)
  • Server::Starter, a superdaemon for hot-deploying server programs - this is really cool, and I hope to start using it soon.
One more day of YAPC::NA 2011 left!
Comment from Gabor Szabo - July 13, 2011 at 5:49 am

Nice report. Thank you!

I wonder if the replacement regex engines are used anywhere?

ps. May I recommend you add links in the text of each report to the previous and the next report?

Pingback from YAPC::NA days zero and one » hashbang.ca - July 29, 2011 at 1:20 am

[...] YAPC::NA day two [...]