Below is an off-site archive of all tweets ever posted by @texttheater

July 18th, 2018

Word of the day: Begehungskeks

via Twitter for Android

tinysubversions Promising! I'll have to check if this includes their journals and other archives twitter.com/krenshar_posts…

via Twitter for Android (retweeted on 5:26 PM, Jul 18th, 2018 via Twitter Web Client)

tinysubversions Is there a way to like, pay monthly to get access to a ridiculous number of online journals and archives as though I were affiliated with a university? Right now my solution is "take a single class once a year and pay out of pocket for it"

via Twitter for Android (retweeted on 5:25 PM, Jul 18th, 2018 via Twitter Web Client)

perseveresther a little combinatorial explosion

via Twitter for Android (retweeted on 5:25 PM, Jul 18th, 2018 via Twitter Web Client)

Auxquelles, but as a Belgian village

via Twitter for Android

SdeWijs Great piece by @berndulrich in @DIEZEIT - "The Magic of Populists" - arguing that the West's political crisis first hit the Social Democrats + is now tearing apart the conservatives. Perhaps the answer is in more @enmarchefr parties uniting the social-democratic + conservative center? pic.twitter.com/uJjXY4kI1V

via Twitter Web Client (retweeted on 4:40 PM, Jul 18th, 2018 via Twitter for Android)

frachtschaden I would be perfectly willing to pay 5 euros a month in broadcasting fees.

via Twitter for iPhone (retweeted on 4:39 PM, Jul 18th, 2018 via Twitter for Android)

kathrinpassig @texttheater "The Bongo Bungler" by Janwillem van de Wetering shouldn't have been called "Bongo-Pfuscher" but "Trommel-Trottel"; who cares what kind of percussion instrument it is.

via Twitter for Android (retweeted on 4:38 PM, Jul 18th, 2018 via Twitter for Android)

ojahnn Motivation chapter of my thesis twitter.com/MobyDickatSea/…

via Twitter Web Client (retweeted on 4:09 PM, Jul 18th, 2018 via Twitter Web Client)

kathrinpassig The right and really simple solution to a translation problem just came to me. The book in which I wrote down the wrong one was published 17 years ago and has been out of print for 16.

via Twitter for Android (retweeted on 3:13 PM, Jul 18th, 2018 via Twitter Web Client)

acl2018 "Algorithms like LSTM and RNN may work in practice. But do they work in theory?" asks M. Steedman pic.twitter.com/Ua8pKFDQwS

via Twitter for Android (retweeted on 12:44 PM, Jul 18th, 2018 via Twitter Web Client)

ojahnn Do you know the story of Jim Knopf, who gets holes in his trousers so often that Frau Waas sews a button on so he can button the hole open and shut, instead of her having to patch the trousers every time? I need something like that for my bathroom floor.

via Twitter Web Client (retweeted on 11:37 AM, Jul 18th, 2018 via Twitter for Android)

oaostrik More deets for your delectation pic.twitter.com/n2oPuzMBub

via Twitter for Android (retweeted on 11:19 AM, Jul 18th, 2018 via Twitter for Android)

msmollyebrown Today while giving high school students a tour of the Archives one asked what the microfilm reader was. He then realized "Oh, that's what people use in horror movies to find out the scary history of the town." WHAT. A. HOT. TAKE.

via Twitter Web Client (retweeted on 11:11 AM, Jul 18th, 2018 via Twitter Web Client)

drgriffis Running theme of : we need to start thinking about broader kinds of generalization, to testing situations that don’t mirror the training distribution. Sounds like we’ve got a good topic to think about for already!

via Twitter for iPad (retweeted on 11:11 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: One component missing from standard semantics is information structure—marked by intonation in English and other things in other languages. Have worked on modeling that with CCG (point of [you LIKE] example above).

via TweetDeck (retweeted on 10:47 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender @adveisner Steedman: I find generation completely terrifying because as you say you can get away with an incomplete semantics when doing parsing, but not in generation. And we don’t know how to do that completely or properly.

via TweetDeck (retweeted on 10:47 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender @adveisner Steedman: Short answer, just as syntax & semantics are a packaged deal, I think music is too, with a very different semantics. Syntax is tiny but ambiguity is much greater.

via TweetDeck (retweeted on 10:46 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender @adveisner Steedman: Moreover, the class of grammars needed for relations between chords (with semantics of things like cadence) is exactly the same as is needed for natural language, including non-adjacent dependency.

via TweetDeck (retweeted on 10:46 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender @adveisner Steedman: Interesting discovery: This problem is big enough that it needs the same kind of statistical models used in the search problem in parsing.

via TweetDeck (retweeted on 10:46 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender @adveisner Steedman: Have been parsing MIDI output (pitch approximated as nearest semitone, plus duration) to work out key signature and identity of chords.

via TweetDeck (retweeted on 10:46 AM, Jul 18th, 2018 via Twitter Web Client)
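
A minimal sketch of that semitone approximation, assuming the standard equal-temperament formula; the function name is illustrative, not anything from Steedman's actual pipeline:

```python
import math

# Quantize a frequency in Hz to the nearest equal-tempered semitone,
# expressed as a MIDI note number (69 = A4 = 440 Hz).
def nearest_semitone(freq_hz: float, a4_hz: float = 440.0) -> int:
    return round(69 + 12 * math.log2(freq_hz / a4_hz))

assert nearest_semitone(440.0) == 69   # A4
assert nearest_semitone(261.63) == 60  # middle C
```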

emilymbender @adveisner Steedman: I believe that just as we got standard language from planning, we also got music at the same time. Have been working on this with a couple of very brilliant students.

via TweetDeck (retweeted on 10:46 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Even if seq2tree is psychologically real, we still face the supreme challenge of finding out what the universal semantic language looks like. SQL, GeoQuery, SPARQL are not proxies for the language of mind.

via TweetDeck (retweeted on 10:45 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Can they learn all the stuff in the long tail: non-constituent coordination, subject extractions, cross dependencies?

via TweetDeck (retweeted on 10:45 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Conclusion: LSTM and RNN work in practice, but do they work in theory?

via TweetDeck (retweeted on 10:45 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Skipping final section of talk: Whatever we do about that problem, the problem of semantics is harder. QA involves inference as well as semantics, and we have no idea of the semantic representation that will support it.

via TweetDeck (retweeted on 10:45 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: But the future probably lies with hybrid systems.

via TweetDeck (retweeted on 10:44 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: So my money is going to be on a continuing need for CCG parsers in tasks like QA where long-range dependencies matter.

via TweetDeck (retweeted on 10:44 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: But neural SRL systems still have trouble with long-range wh dependencies. Tried English and French subject extraction (with special syntax in both) in an NMT system and it failed.

via TweetDeck (retweeted on 10:44 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: (Knowledge graphs used to be called Semantic Nets, and didn’t work because the computers were so small.)

via TweetDeck (retweeted on 10:43 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: DNNs are faster because we don’t have access to a universal semantic representation that would allow children to induce full CCGs for their first languages.

via TweetDeck (retweeted on 10:43 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: But it’s very idiosyncratic (horrible knowledge graph languages), so it’s good to do with DNNs, which is faster.

via TweetDeck (retweeted on 10:43 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: So go semantic parser induction for arbitrary knowledge graphs (like Freebase) (Reddy et al 2014). But surpassed by DNNs.

via TweetDeck (retweeted on 10:43 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Unreliability of those parsers is one reason we’re turning to end-to-end methods. (EMB: Really? I thought it was because CS researchers like to ‘learn’ the solution to the whole thing.)

via TweetDeck (retweeted on 10:43 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: However, limited by size of available training data. About half of the errors in wh dependencies come from lack of info in training data.

via TweetDeck (retweeted on 10:43 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Planning is a search problem through a labyrinth of possible states, and works with the same algorithms as parsing (CKY & similar). So we have the infrastructure to hang language onto.

via TweetDeck (retweeted on 10:42 AM, Jul 18th, 2018 via Twitter Web Client)
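
To make the shared machinery concrete, here is a minimal CKY recognizer over an invented two-rule grammar in Chomsky normal form; this is the textbook algorithm the tweet refers to, not code from the talk:

```python
from itertools import product

# Minimal CKY recognizer for a grammar in Chomsky normal form.
# lexical maps a word to the nonterminals that can produce it;
# binary maps a pair of nonterminals to the left-hand sides
# that can produce that pair.
def cky(words, lexical, binary, start="S"):
    n = len(words)
    # chart[i][j] = set of nonterminals covering words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for b, c in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= set(binary.get((b, c), ()))
    return start in chart[0][n]

lexical = {"fish": {"N", "V"}, "people": {"N"}}
binary = {("N", "V"): {"S"}}
print(cky(["people", "fish"], lexical, binary))  # True
```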

emilymbender Steedman: (Examples of more and more complex animal planning, and how they relate to the combinators.)

via TweetDeck (retweeted on 10:42 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: So good that you can do the rest of the parsing exhaustively e.g. with A* search (shout out to Mike Lewis).

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: CCG also particularly well suited to parsing with supertagging front ends. Finite-state, Markovian models that probabilistically assign tags giving info about what the word expects to combine with, as a precursor to real parsing.

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)
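
A toy illustration of that supertagging step, assuming the usual beta-threshold multitagging before parsing; the categories and probabilities here are invented, not the output of a trained model:

```python
# Toy supertagger output: per-word distributions over CCG categories.
# A real supertagger is a trained sequence model; these numbers are
# invented for illustration.
tag_dist = {
    "Marcel": {"NP": 0.97, "N": 0.03},
    "proved": {"(S\\NP)/NP": 0.90, "(S\\NP)/S": 0.08, "S\\NP": 0.02},
    "completeness": {"N": 0.85, "NP": 0.15},
}

def multitag(words, beta=0.1):
    """Keep every category within a factor beta of each word's best
    one (the usual beta-threshold used before CCG parsing)."""
    result = []
    for w in words:
        dist = tag_dist[w]
        best = max(dist.values())
        result.append({c: p for c, p in dist.items() if p >= beta * best})
    return result

print(multitag(["Marcel", "proved", "completeness"]))
# Only categories close to each word's best tag survive, so the
# parser's search space shrinks drastically before real parsing.
```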

emilymbender Steedman: Ambiguity wasn’t a problem because there is so much genuine ambiguity already and we already need the statistical models.

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: However, the supposedly spurious constituents also show up in coordination and intonational phrases. Any grammar with the same coverage will have the same degree of non-determinism in the parser.

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: CCG initially assumed to be hopeless for parsing because of massive spurious ambiguity.

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Vijay-Shanker and Weir 1990 proved the “shared stack” claim of Ades and Steedman (1982).

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: For n=8, around 80% of the permutations are non-separable. Useful for alignment in MT!
(And 8 is not large for the elements of a sentence.)

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)
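
That figure checks out: separable permutations of n elements are counted by the large Schröder numbers, so the non-separable fraction for n=8 can be computed directly (standard recurrence, not from the talk):

```python
from math import factorial

# Separable permutations of n elements are counted by the large
# Schroeder number S(n-1): 1, 2, 6, 22, 90, 394, 1806, 8558, ...
def schroeder(m):
    s = [1, 2]
    for n in range(2, m + 1):
        s.append(((6 * n - 3) * s[n - 1] - (n - 2) * s[n - 2]) // (n + 1))
    return s[m]

n = 8
separable = schroeder(n - 1)            # 8558 of 8! = 40320
non_separable = 1 - separable / factorial(n)
print(f"{non_separable:.1%}")           # 78.8%, i.e. "around 80%"
```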

emilymbender Steedman: 21 of the 22 permitted are attested in some language, but the two forbidden ones are among the unattested three (Cinque 2005, Nchare 2012); there’s a 1 in 100 chance of this happening randomly.

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: It’s the B2 rules that give CCG greater than context-free power, while remaining less powerful than movement.

via TweetDeck (retweeted on 10:25 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: All long-range dependencies are done by contiguous reduction of a wh-element with an adjacent non-standard constituent with category S/NP, formed by rules of function composition.

via TweetDeck (retweeted on 10:23 AM, Jul 18th, 2018 via Twitter Web Client)
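
A minimal sketch of that derivation step, with an invented tuple encoding of categories (not a real CCG parser): type-raising the subject and composing it with the transitive verb yields the non-standard constituent S/NP, which the relative pronoun then takes as its argument:

```python
# Invented tuple encoding of CCG categories (result, slash, argument);
# just the two rules at play, not a real parser.
S, NP = "S", "NP"

def fapply(x, y):
    """Forward application: X/Y  Y  =>  X"""
    if isinstance(x, tuple) and x[1] == "/" and x[2] == y:
        return x[0]

def fcompose(x, y):
    """Forward composition: X/Y  Y/Z  =>  X/Z"""
    if (isinstance(x, tuple) and isinstance(y, tuple)
            and x[1] == "/" and y[1] == "/" and x[2] == y[0]):
        return (x[0], "/", y[2])

likes = ((S, "\\", NP), "/", NP)               # (S\NP)/NP
she   = (S, "/", (S, "\\", NP))                # type-raised subject S/(S\NP)
that  = (("N", "\\", "N"), "/", (S, "/", NP))  # (N\N)/(S/NP)

she_likes = fcompose(she, likes)   # the non-standard constituent S/NP
print(she_likes)                   # ('S', '/', 'NP')
print(fapply(that, she_likes))     # "that she likes" => N\N
```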

emilymbender Steedman: Combinatory Projection Principle: what’s in the lexicon (like combination direction) can’t be overridden by the combinatory rules.

via TweetDeck (retweeted on 10:23 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: CCG lexicalizes all bounded dependencies and all syntactic rules are combinatory. CCG locks together syntactic and semantic categories.

via TweetDeck (retweeted on 10:23 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Computational linguists realized that they were spending all their time implementing exceptions to limit the search through such expressive grammars.

via TweetDeck (retweeted on 10:11 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: And the field fragmented. Linguists abdicated responsibility for “performance”, focusing on “competence”. Psychologists became agnostic about grammar (and I think they still are).

via TweetDeck (retweeted on 10:10 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Can model abstraction in applicative systems by making abstraction a primitive operation (lambda) or in terms of a collection of operators on strictly adjacent terms: combinators like function composition.

via TweetDeck (retweeted on 10:10 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: But it doesn’t follow from that that we have nothing to learn from linguists! (EMB: 💯) In fact, if I knew more linguistics, I wouldn’t have made such a mess of CCG.

via TweetDeck (retweeted on 10:05 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Why does natural language allow discontinuity?

via TweetDeck (retweeted on 10:01 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Ades & Steedman (1982) suggest that the same stack can be used to characterize both long-range dependencies and recursion. This is the idea behind the first CCGs.

via TweetDeck (retweeted on 10:00 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Hard to see how to generalize to multiple, crossing dependencies. If the HOLD register is a stack, the ATN becomes a two-stack machine, i.e. a Turing machine.

via TweetDeck (retweeted on 10:00 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: ATN parser reduces all unbounded dependencies to local operations on registers.

via TweetDeck (retweeted on 10:00 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: The starting point for this development: the central problem is discontinuity, i.e. non-adjacent or non-projective dependency. Ex with coordination and relative clause.

via TweetDeck (retweeted on 10:00 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Soon after, this consensus fell apart. Chomsky saw that transformation rules were too expressive to have explanatory force. Psychologists realized that their measures of processing difficulty didn’t resemble transformational derivational complexity.

via TweetDeck (retweeted on 9:53 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: In the 1960s, theoretical linguists, psychologists and computational linguists (and the AI community) saw ourselves as working on the same problem.

via TweetDeck (retweeted on 9:53 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Psychologists had always insisted that competence/performance can’t be completely divorced. No point in having grammar w/o parser & vice versa.

via TweetDeck (retweeted on 9:52 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Have exasperated the mathematical linguists by changing the formalism to match what I was discovering.

via TweetDeck (retweeted on 9:51 AM, Jul 18th, 2018 via Twitter Web Client)

emilymbender Steedman: Chose that title to stress that CCG has always been an empirical grammatical formalism.

via TweetDeck (retweeted on 9:51 AM, Jul 18th, 2018 via Twitter Web Client)

AlvinGrissomII During this interesting talk by Anton van den Hengel, shortcomings of DL seem to be weaknesses in ML in general. My impression is that since CNNs were such a jump for vision (much more so than for NLP), the assumption in vision is always to learn w/ DL. Not so in NLP.

via Twitter Web Client (retweeted on 9:06 AM, Jul 18th, 2018 via Twitter for Android)

"Kartoffeln sind toll, aber irgendwie ist mir die Konsistenz nicht eklig genug." -- Erfinder der Kartoffelklöße

via Twitter Web Client

yoavgo Indeed! Lexical/category knowledge is very important!

Today 14:45 (plenary) @VeredShwartz will present our work showing that SOTA end-to-end neural SNLI systems fail to acquire such knowledge. twitter.com/emilymbender/s…

via Twitter for iPad (retweeted on 8:24 AM, Jul 18th, 2018 via Twitter for Android)