  • Well, I did clarify that I agree the overarching point of this paper is probably fine…

    widely accepted linguistic standard

    I am not a linguist, so I apologise for my ignorance about how things are usually done. (Also, thanks for educating me.) But on the other hand, just because this is the accepted way doesn’t mean it is right in this case, especially when you consider that the information rate is also calculated from syllables.

    syllable bigrams

    Ultimately this just measures how quickly the speaker can produce different combinations of sounds, which is definitely not what most people would envision when they hear “information in language”. For linguists who are familiar with the methodology, this might be useful data, but the general public will just get the wrong idea and make baseless generalisations, as evidenced by the comments under this post. All in all, this is bad science communication.




  • So I did a quick pass through the paper, and I think it’s more or less bullshit. To clarify, I think the general conclusion (different languages have similar information densities) is probably fine. But the specific bits/s numbers for each language are pretty much garbage/meaningless.

    First of all, speech rate is measured in the number of canonical syllables, which a) is unfair to non-syllabic languages (e.g., arguably, Japanese), and b) favours (in terms of speech rate) languages that omit syllables a lot. For example, you won’t say “probably” in full, you would just say something like “prolly”, which still counts as 3 syllables according to this paper.

    And the way they calculate bits of information is by counting syllable bigrams, which is just… dumb and ridiculous: a bigram model only captures how predictable the next syllable is from the previous one, i.e. local sound patterns, not the actual meaning being conveyed.
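
    For concreteness, here is a minimal sketch of the kind of bigram-based estimate being criticised. Everything below (the syllable “corpus”, the speech rate) is a made-up toy example, not data or code from the paper:

    ```python
    from collections import Counter
    from math import log2

    def bits_per_syllable(syllables):
        """Conditional entropy H(next syllable | previous syllable),
        estimated from syllable-bigram counts (maximum likelihood)."""
        bigrams = Counter(zip(syllables, syllables[1:]))
        prev_counts = Counter(syllables[:-1])
        total = sum(bigrams.values())
        h = 0.0
        for (prev, cur), n in bigrams.items():
            p_joint = n / total              # P(prev, cur)
            p_cond = n / prev_counts[prev]   # P(cur | prev)
            h -= p_joint * log2(p_cond)
        return h

    # Toy transcript as a flat list of syllables (hypothetical).
    corpus = ["ka", "to", "ka", "ri", "to", "ka", "to", "ri", "ka", "to"]
    info_density = bits_per_syllable(corpus)  # bits per (canonical) syllable
    speech_rate = 6.0                         # canonical syllables/second (assumed)
    info_rate = info_density * speech_rate    # the headline bits/s figure
    ```

    Note how everything here depends on syllable identity alone: the estimate only reflects how predictable each syllable is from the one before it, which is why bits/s numbers computed this way say nothing about semantic content.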