I’m working through the books I’ve been carrying across the country over the past several months. Soon I’ll be moving to Colorado and getting my second vaccine dose. Though COVID has not disappeared here or elsewhere in the world, I’m retiring the title “Pandemic Reads”. Hopefully I can make reading and recommending books a more regular part of my life.
What do I think about reviewing 19 books during a pandemic year? I feel like a determined reader could read two or three books in a month and be very thoughtful on Goodreads. By that standard, I under-delivered; I watched…
Facebook recently published “Casual Conversations”, a new benchmark of 45,000 short videos of paid actors, with diversity along speakers’ age, gender, and skin color (Facebook classifies these speakers on the Fitzpatrick skin-type scale rather than by race or ethnicity).
While downloading the dataset, I was listening to a New Naratif podcast about hijab rules in Singapore, and wondered whether Facebook had included any hijab-wearing women in a diversity dataset. To make the question applicable to a wider audience, my research question is: how many speakers in the dataset wore a hat or head covering? What types are included in the dataset? …
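One rough way to start answering that (this is my own sketch, not part of the official release, and the local file layout is an assumption) is to grab a single frame per downloaded clip so a human can tag head coverings quickly:

```python
# Pull the middle frame from each downloaded clip for quick manual review.
# VIDEO_DIR is an assumed local download folder, not a path from the release.
from pathlib import Path

import cv2  # pip install opencv-python

VIDEO_DIR = Path("casual_conversations")
FRAME_DIR = Path("frames")
FRAME_DIR.mkdir(exist_ok=True)

for video_path in sorted(VIDEO_DIR.glob("*.mp4")):
    cap = cv2.VideoCapture(str(video_path))
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, total_frames // 2)  # seek to the middle
    success, frame = cap.read()
    if success:
        cv2.imwrite(str(FRAME_DIR / f"{video_path.stem}.jpg"), frame)
    cap.release()
```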
Arizona opened COVID vaccine registration at its state-run facilities to all adults as of March 24th. By the end of the day, all of the major vaccination sites in Maricopa County (which includes Phoenix) had updated their websites, too.
I searched for vaccines at CVS, Walgreens, Safeway, and Fry’s Food and Drug. A particular wrinkle in my schedule is that I’m moving, so I was looking for either the one-dose J&J vaccine, a second dose by April 17th, or a second dose in Colorado (which will open to all adults in mid-April).
I was skimming Twitter when an unrelated post about keras-tuner got me thinking: should we use hyperparameter tuning for ML fairness? And why was my immediate reaction so negative?
I don’t intend to resolve the question here, but I think arguments could be made for either side, which would make it a good ML interview question.
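For the “yes” side, here is roughly what it could look like. keras-tuner accepts a custom objective, so in principle you could log a fairness metric during validation and minimize it. The metric name `val_group_accuracy_gap` and the toy `build_model` below are my own placeholders, not anything from the original post:

```python
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    # Toy model; layer width and learning rate are the tuned hyperparameters.
    model = keras.Sequential([
        keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = kt.RandomSearch(
    build_model,
    # "val_group_accuracy_gap" is hypothetical: you would compute and log
    # the accuracy gap between demographic groups yourself (for example,
    # in a Keras callback) so the tuner can minimize it.
    objective=kt.Objective("val_group_accuracy_gap", direction="min"),
    max_trials=20,
    directory="fairness_tuning",
)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val))
```

Whether optimizing a fairness number this way actually produces a fairer model, or just a model that games the metric, is exactly the debatable part.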
I’ve delayed this post for a long time because Separate and my next Supreme Court read are each over 500 pages. I need to set goals for daily reading. What works for you?
You may remember Plessy v. Ferguson as the “separate but equal” case that was overturned by Brown v. Board of Education (along with years of work by Black activists and the Civil Rights Act of 1964). But I knew nothing about Plessy the person: where he came from, when the ruling happened, or how the Supreme Court case played out. …
After looking back at the start of COVID in early July, and checking in about ‘things that suck’ in late October, I realized that another four months would round out a full year, and here we are (almost).
In my October post, I didn’t even mention vaccines, and I asked whether I believed my family could stay COVID-free for another year. One older relative had asked if she would ever see the world go back to normal. Within a few weeks, the Moderna and Pfizer trials showed the vaccines to be more than 90% effective (far better than expected). By the end of the year, we were discussing whether…
I’m way behind on reading books for reviews, but here are some TV options:
In May 2020, I posted a project where I used spaCy and BERT to “flip” gender in Spanish sentences (un profesor viejo <-> una profesora vieja, i.e. “an old (male) professor” <-> “an old (female) professor”). This was useful for evaluating models’ biases or augmenting training data, but it was slow and depended on hardcoded variables in my script. At the time, I suggested the next step would be a sequence-to-sequence (seq2seq) neural network of the kind often used to translate or summarize text.
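As a minimal sketch of that next step, you could treat gender flipping as “translation” from a sentence to its counterfactual. The checkpoint name below is a placeholder for a model fine-tuned on flipped sentence pairs; no such public model is implied:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical checkpoint: a seq2seq model fine-tuned on
# (sentence, flipped sentence) pairs.
MODEL = "username/es-gender-flip"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

inputs = tokenizer("un profesor viejo", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# hoped-for output: "una profesora vieja"
```

The appeal over the rule-based script is that the model can learn agreement patterns (articles, adjectives, less common nouns) from data instead of hardcoded variables.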
In addition to bias evaluation and data augmentation, I’ve collected more reasons to use counterfactuals in any language:
One of the first steps in an NLP pipeline is dividing raw text into words or word-pieces, known as tokens. But what if you don’t have spaces to divide sentences into words?
People do write some spaces in Thai text, as you can see above, but they aren’t as universal as they are in English. There is also no set punctuation to end a Thai sentence. This can cause confusion (or poetry), but humans are good at separating words and sentences in context. The difficult part, then, is getting computers to pick up on that context.
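For a quick look at what this means in practice, PyThaiNLP ships word segmenters that supply that context; “newmm”, shown here, is its dictionary-based default engine (the exact output is illustrative):

```python
from pythainlp.tokenize import word_tokenize  # pip install pythainlp

text = "ผมรักภาษาไทย"  # "I love the Thai language", written without spaces
print(word_tokenize(text, engine="newmm"))
# e.g. ['ผม', 'รัก', 'ภาษาไทย'] -- word boundaries recovered without space cues
```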
Recently I posted a benchmark summary for three Bangla language models and one multilingual model (Indic-BERT). I’ve bolded any models within 1 percentage point of the top score.
Indic-BERT and my own ELECTRA model performed well on Sentiment Analysis and News Topics, but notably worse on Hate Speech classification, failing to match mBERT. What makes this task so difficult, and why does it affect models differently?
When I shared my results, the Indic-BERT team asked some questions and I went back to my original source for the data. …
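For context, here is a hedged sketch of how such a comparison is typically set up with Hugging Face: load each checkpoint with the same classification head and score it on a shared split. The input text and two-label setup are placeholders; `ai4bharat/indic-bert` is the real Indic-BERT id on the Hub:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoints = [
    "ai4bharat/indic-bert",          # Indic-BERT on the Hugging Face Hub
    "bert-base-multilingual-cased",  # mBERT
]
text = "..."  # a held-out Bangla comment from the shared test split

for name in checkpoints:
    tokenizer = AutoTokenizer.from_pretrained(name)
    # Fresh 2-label head; in the real benchmark each model is fine-tuned
    # on the same hate speech training split before being evaluated.
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(-1)
    print(name, probs)
```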