If you’re anything like me, when you watch this, you’ll be forced to marvel at the absolute insanity of the whole thing.
Dr. Robert R. Redfield, MD, was a colonel in the US Army and has had a distinguished career as a physician, medical researcher, and public health expert. His primary areas of expertise include virology and immunology, and he has years of experience studying the treatment of infectious disease in clinical settings. He held a tenured professorship in the medical school of a highly respected American university, and has served in numerous advisory roles to a wide range of federal government agencies. All that was before Donald Trump appointed him to be the director of the Centers for Disease Control and Prevention and the administrator of the Agency for Toxic Substances and Disease Registry.
In that role, Dr. Redfield recently advised a Senate panel that even with an optimistic vaccine timeline, the general public would not be inoculated until the summer or fall of 2021. He also told them that masks could be a more effective protection against COVID-19 than the vaccine. If it wasn’t clear from that paragraph above about his background, this guy isn’t just a doctor. He’s a respected virologist, a prominent expert on the clinical treatment of infectious disease, and the head of a government agency — chock-full of infectious disease experts — whose whole purpose is to protect public health and safety through the control and prevention of disease. That’s the guy who provided clear, concise, and well-informed answers to a Senate panel on the anticipated availability of a vaccine and on the medical efficacy of face masks.
After Redfield’s statements got considerable media attention, Donald Trump told reporters, “I believe he was confused” and insisted a vaccine could be available in weeks and go “immediately” to the general public. And he claimed (with usual Trump confidence) that a “vaccine is much more effective than the masks.”
I get that Trump’s supporters believe he’s a great leader. I get that they’re prepared to overlook his moral failings, that they don’t see him as a liar because it appears to them that the folks relentlessly accusing him of lying have an obvious axe to grind. Sure… that sort of thinking is evidence of a hyper-partisan commitment to the MAGA movement, and as such it defies rationality and logic… I’m just saying I get it.
But what I don’t get — what’s absolutely breathtaking to me — is the idea that anyone believes Donald Trump when he stands in front of reporters and says with a straight face that Dr. Robert R. Redfield, MD “was confused.” Even if you like him, this is a guy who has had trouble pronouncing the word ‘Yosemite’ and who has said (on tape!) that his preference is to present a positive, optimistic outlook in his public statements about the pandemic (and that’s a generous read of his comments to Woodward). That guy, who happens to be running for re-election and therefore has a clear interest in presenting himself as successfully leading national efforts to defeat the pandemic, is telling you to believe his own assessment instead of believing the “confused” official statements of the respected physician, virologist, and public health expert (that he appointed!).
I really do mean the word “breathtaking” — it literally takes my breath away when I try to wrap my head around the fact that anyone is inclined to believe Trump when they hear him say these things. Understanding how this guy is still a viable candidate for any public office either requires serious mental gymnastics or is cause for deep depression. (Or it’s both.)
Here’s my takeaway: There can’t be a better illustration of our country’s brokenness than watching John Berman and CNN be accused of unfairness, bias, and partisanship for pointing out that the director of the CDC, Dr. Robert R. Redfield, MD, is a reliable expert on vaccines and public health policy and that Donald J. Trump is not.
From yesterday’s Washington Post: One space between each sentence, they said. Science just proved them wrong.
Some reactions:
As has become commonplace, the headline is a little overzealous. The scientists behind the study probably wouldn’t use such strong language, and the rest of the article is a little more cautious in the language it uses when drawing conclusions from the research.
The researchers used a fixed-width/monospace typeface. To say that misses the point is an understatement. Even most of us one-space zealots admit that two spaces makes sense for monospace type.
One of the study’s authors says it’s still reasonable to infer from this that their results would also apply to proportional type, but her reasoning only makes sense if you don’t understand how fonts work, or the real reason one space makes more sense:
…the point of double-spacing is to make up for how monospace type looks weird and janky.
It’s about aesthetics.
The “benefits” of two spaces after a period were only observed in study participants who… wait for it… are people who usually type two spaces themselves. Maybe they didn’t actually learn anything about typography or font legibility, but rather about people being stuck in their own habits.
Major kudos to the Post article’s author, Avi Selk, and to whoever was responsible for formatting the online version. The piece uses a monospace font and all sorts of crazy spacing tricks to literally show instead of just tell. It’s thoughtful, creative, and very effective.
(And you gotta love the note at the end, which is — ironically enough — a nail in the coffin of the two-space argument: “Note: An earlier version of this story published incorrectly because, seriously, putting two spaces in the headline broke the web code.” There’s a nerdy reason for that, sketched below.)
Sorry, but the “science” doesn’t prove anything here. Lifehacker’s take on this is right on: “No, You Still Shouldn’t Put Two Spaces After a Period.”
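(Bonus nerd note on why two spaces “broke the web code”: browsers collapse any run of whitespace in HTML into a single space, so a double space survives only if the markup explicitly opts out of collapsing. Here’s a minimal sketch of the kind of trick the Post’s formatters presumably needed; it’s my guess at the approach, not their actual code.)

```html
<!-- Browsers collapse runs of whitespace, so "end.  Next" normally
     renders with a single space. To demo a real double space, you
     have to opt out of collapsing. A guess at the kind of trick
     involved, not the Post's actual markup: -->
<p style="font-family: monospace; white-space: pre-wrap;">
One space after a period. The next sentence starts here.
Two spaces after a period.  The extra space renders as typed.
</p>

<!-- Or force the extra space explicitly with a non-breaking space: -->
<p>Two spaces, the hard way.&nbsp; The next sentence starts here.</p>
```

(Either way, the default behavior is hostile to the two-space habit, which is presumably what bit the headline template.)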
Though it’s sometimes called a “superpower,” hyperfocus is a classic challenge for adults with ADHD, and it’s particularly problematic when it’s detrimental to sleep.
I’ve mentioned my ADHD here before, but I’ve been reluctant to share anything particularly specific or personal about it because, well, I’ve kind of felt like it was no one else’s business.
There’s a part of me that always thought sharing more would be beneficial, since there’s so much misinformation out there, and the only way to help people understand is to actually help them understand. Also, though I’m not interested in turning this space into a stand-in for therapy, it’s clear that there’s a potential for personal benefit in processing my own thoughts and struggles in writing.
This all occurred to me when an article critical of Gary Vaynerchuk’s philosophies on work appeared in my Medium feed. Though it wasn’t written about ADHD at all, I read it through that lens. (Because: That’s my lens. Or, rather, that’s one of my lenses.)
I don’t think I’d ever heard of “Gary Vee” before reading the article, and everything I know about him is from the reading/Googling I did after reading it. So my reactions here aren’t so much to him as to the excerpt itself, since I don’t really know anything about him, his worldview, or the advice he gives people about business, work, and success.
The article on Medium, “Gary Vaynerchuk is Trying to Kill You,” is a reaction to a statement from Vaynerchuk, from his book, Crush It!: Why NOW Is the Time to Cash In on Your Passion:
Here’s the deal: if you want it badly enough, the money is there, the success is there, and the fulfillment is there. All you have to do is take it. So quit whining, quit crying, quit with the excuses. If you already have a full-time job, you can get a lot done between 7:00 P.M. and 2:00 A.M. (9:00 P.M. to 3:00 A.M. if you’ve got kids), so learn to love working during those predawn hours. I promise it won’t be hard if you’re doing what you love more…
The article’s author, Jon Westenberg, argues that the behavior described above is problematic because sleep is so important.
But I read it and thought to myself: Wow. That’s an incredible description of self-destructive ADHD behavior.
To explain what I mean, we need to start by correcting the common misconception that ADHD is about not being able to pay attention. That’s not at all what it is — and that misconception is perhaps the most problematic way in which the general public misunderstands ADHD, especially in adults.
Rather than not being able to pay attention, adults with ADHD have trouble regulating their attention. (You might say it’s a “maldistribution of attention.”)
ADHD is a disorder of the brain’s executive functioning abilities, among them “organizing, prioritizing and activating for tasks.” That means there’s a problem in the part of the brain that can (among other things) differentiate between tasks and activities related to long-term (delayed gratification) goals and those that lead to immediate stimulation.
And though “stimulation” can mean the kind you might think — like what comes from caffeine, sex, exercise, action movies, getting lots of likes on Facebook — it actually means anything that releases copious amounts of dopamine. And a task that keeps the happy hormones flowing for a while by providing legit intellectual stimulation… well, that’s just the ticket for the ADHD brain.
So: Working on an exciting and interesting new work project does the trick. That’s why adults with ADHD actually have no problem paying attention to tasks that are stimulating to their brains (at least in that moment). In fact, one common characteristic of ADHD is the tendency to go into a state of “hyperfocus,” in which we’re able to dedicate total and complete attention to a task.
It’s a euphoric experience of completely losing yourself in something. For me, it tends to be creative projects. I find myself in a state of recognizable happiness when I immerse myself in the details of graphic design or well-organized CSS, finding a sense of “flow” (see: Csikszentmihalyi) and “one-ness” with the artistic endeavor. (There’s a reason some people say that this aspect of ADHD is a “superpower.”)
But that euphoria is short-lived when we finally snap out of hyperfocus, realize we’ve totally lost track of time, and must deal with the consequences of neglecting all those other tasks that needed to get done.
The problem with being unable to direct or manage our attention is that we go right to the most “stimulating”/interesting/exciting task, skipping right over the step where we stop and ask whether that’s the right task to be doing at that moment, or whether the time we’re spending on it is consistent with its importance.
(And even if we don’t skip that step and are able to recognize — cognitively — that there are more important things that need to get done, we have immense trouble bringing ourselves to get started on those more-important-but-less-stimulating things.)
All of that is just background. And though I’d usually be careful not to take so long to get to the point, I think it’s important context for my reaction to that book excerpt.
For an adult with ADHD, Gary Vaynerchuk’s advice is extremely attractive. And it’s also a recipe for disaster.
I say that for two reasons.
The idea of using the late-night hours to work — when everyone else is asleep and there’s nothing else demanding your attention — is a perfect technique for falling into a state of hyperfocus. For someone capable of focusing their attention with intentionality, I suppose that could be a good thing, since having the time and quiet to focus on otherwise-backburner projects is an attractive opportunity. But if you can’t direct your focus, a quiet and uninterrupted stretch of time means you can easily spend those early morning hours on unimportant tasks.
When I was a (very nerdy) teenager, that meant spending a whole night applying perfect semantic HTML (with intertextual hyperlinking) to a complete database downloaded in plaintext from some sci-fi bulletin board that listed every ship to ever appear or be mentioned in Star Trek, TNG, and DS9 (well… the first season of DS9). Sure — I taught myself HTML in the process and I understood the value of semantic markup before it was hip to talk about. If I’d actually stayed committed and kept learning, it would have been a useful endeavor. But I barely looked at HTML again until after college (and I fell out of love with Star Trek sometime mid-Voyager), and in the meantime, I was a ninth grader who wanted to be a journalist and I was barely passing English. Had I maybe spent just a little of that time doing a small percentage of my homework, it would have been beneficial.
(I’m not saying the quiet nighttime was dangerous because my parents weren’t there to nag me to do my homework. The problem is that the lack of regular activity around me — noise, light, movement, people asking me questions, the sound of someone snacking in the kitchen, the sound of the TV in the other room — meant that it was much easier to slip into an especially deep state of hyperfocus. Could I slip into it despite all those distractions? Definitely, if the task proved to be so stimulating that I could shut out everything else. But a listing of every obscure spaceship ever uttered in filler dialog by Denise Crosby or Jonathan Frakes would probably not have sucked me in quite so deep during regular waking hours.)
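(For the curious, here’s a rough, from-memory sketch of the kind of “semantic HTML with intertextual hyperlinking” I mean. The ship name, episode references, and filenames are all invented for illustration; the original file is mercifully lost to history.)

```html
<!-- A from-memory reconstruction of the idea, not the original file.
     The ship, episodes, and filenames below are invented.
     The point: meaningful structure (a definition list, citations)
     plus cross-linked references, instead of a plaintext dump. -->
<dl>
  <dt><cite>USS Example</cite> (NCC-0000)</dt>
  <dd>
    <em>Galaxy</em>-class starship. Mentioned in
    <a href="tng-season1.html#ep05">TNG season 1, episode 5</a>;
    seen briefly in <a href="ds9-season1.html#ep02">DS9 season 1,
    episode 2</a>.
  </dd>
</dl>
```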
Some experts have made the claim that ADHD is, at its core, a sleep disorder. I’m not sure if that’s true — it’s not the prevailing mainstream opinion — but it’s grounded in the observation that ADHD is related, perhaps on a pretty deep level, to sleep dysfunction.
I won’t get into the details, mostly because I’m not an expert and I don’t think I understand all the intricacies. But the simplified version is that sleep is an issue for adults with ADHD. We tend to sleep weird hours, don’t necessarily always get “good” sleep, stay awake later than we should, have trouble waking up in the morning, and when we do wake up we struggle to transition from sleep to true wakefulness. And common ADHD medications can be a double-edged sword in this regard, since stimulants can be helpful in many ways, but just a slight mis-dosage or mis-timing can severely throw off your sleep schedule.
It’s also clear that one particularly effective strategy for managing adult ADHD is to get regular, adequate, restful, restorative sleep.
On the other hand, when we don’t get decent sleep, we tend to have a much harder time managing all the challenges that come with ADHD, even if we’re aided by medication, coaching, therapy, whatever. ADHD mixed with inadequate sleep leads to a downward spiral.
So the notion that it’s a good idea to stay up late, getting a few hours less sleep in order to focus on “work” — which is so attractive to adults with ADHD since we have a deep, physical/chemical attraction to the euphoria of hyperfocus — has the potential to be ruinous because those few hours of sleep are just too valuable.
I don’t blame Vaynerchuk for it. For folks without ADHD, his suggestion to get work done in the “predawn hours” may or may not be good advice. (Though Westenberg is probably right that it’s not.)
Rather, my point is that those of us with ADHD have to be vigilant to resist the draw of attitudes like this one, since this thinking can cause so much damage. For adults with ADHD — at least for this adult with ADHD — that damage means you can find yourself in a deep hole out of which it’s tough to climb.
A guy shot up a church in Texas today. Same shit, different day. Sigh.
If you’re pissed about the gun control debate (or lack thereof), sick of politicians who pray for the victims but don’t do anything to prevent it from happening again, disgusted by the polarization of public discourse in our country…
Stop posting about it on Facebook. You’re making it worse. We’re making it worse.
Because here’s the thing (and you know this): You’re typing into an echo chamber. No one who matters can hear you. Your heartfelt rant, your clever-yet-sad statements about politicians’ inability to act, or your tearful pleas about the tragedy of AR-15s… the only people reading them are people who already agree with you.
And it’s probably worse than that. The more we all post this kind of stuff, the better Facebook’s algorithm gets at making sure we don’t interact with anyone who disagrees. Every time we do this, we widen the chasm between red and blue, and we help foment the very things that are destroying America.
And while it’s easy to blame Facebook, let’s be honest with ourselves. We’re the ones killing our democracy. Because we eat this stuff up. We love having our own feelings validated, our opinions affirmed, our worldviews confirmed to be correct.
But if you want this madness to stop, if you want to actually do something about the evil madman in the White House, if you care about gun legislation and women’s right to make decisions about their own bodies and an economy that doesn’t just serve the rich and people’s right to marry whomever they love… don’t post about it on Facebook. When we do, we’re not just wasting our breath. We’re making it all worse by digging ourselves deeper into our trenches. We’re giving Trump and the Russians and Fox News and InfoWars fertile ground to sow mistrust and disunity and polarization.
Instead, go out and talk to someone who doesn’t share your views. Write checks to candidates in contested districts, or volunteer your time to make phone calls for them. Go to a gun store and learn something about these things you want to ban but that so much of this country can’t stop buying. Run for office. Just whatever you do… stop fueling the echo chamber.
(As for me, from now on Facebook is for snarky comments about sports, adorable pictures of my kids, and giving tech advice to friends. No more politics. Because I can’t trust Facebook’s algorithms not to screw up our country even more and I refuse to be a part of it.)
Doug Glanville on Jessica Mendoza in today’s NYTimes:
I root for Mendoza’s success because her journey inspires me, and many others, to think optimistically about what we can overcome despite the stereotypes attributed to our demographic boxes.
As a viewer, I value someone smart, insightful, and analytical above some dude who played for a while. Obviously, they’re not mutually exclusive, and in the purest form of the two- or three-person broadcast team, there’s enough of both insight/analysis and experience that they complement each other. But too often that balance is off, and too few baseball talking heads are smart enough to inform anyone but the most casual fan.
Mendoza has raised the level of ESPN’s broadcast so as to (a) make it watchable (since she ups the quality of the banter, generally); and (b) frequently add nuance to my understanding of the game.
That latter part isn’t because she has Glanville’s experience with Wrigley’s outfield — because she doesn’t — but because she shows up better-prepared than anyone. In that way, it seems her experience as a journalist is far more important than her time winning medals for USA Softball. She’s able to tell us what scouts are saying about a pitcher, or how a player’s been trying to work counts better, or how a manager and GM came to make roster decisions. She respects her audience enough to have taken the time to do her homework, so she has something of value to share with us. (To Glanville’s point, she’s very Scully-like in this way.)
I don’t think that, as a barrier-breaking woman, she’s trying to be smarter or better-informed than her colleagues in the booth. Rather, I think she just is those things because that’s who she is, and I’m glad to read that at least one of those colleagues doesn’t feel threatened or insecure about it.
On Friday, I learned of the death of Jonathan Woocher, PhD.
Over the years, I’ve had multiple opportunities to learn with Jon. He always was incredibly kind to me, and even when we disagreed, or when someone in a workshop challenged him — usually on his take on some recent trend or another — he was never dismissive and always generous. And if he felt his own side needed defending, he always did so with intellectual rigor and humility. I’m grateful and honored to count him among my teachers. (And I can’t imagine I’ll ever stop feeling pangs of jealousy for his job title, possibly the best in the field, if not of all time.)
What others have said is true: Jewish education is stronger due to his intellect and thought-leadership, both of which will be missed. I’d only add that his menschlekheit will be missed, too.
Baruch Dayan haEmet.
Canon Rumors (canonrumors.com) says the new EOS 6D Mk II will be limited to 1080p video. It’ll have some upgrades — Bluetooth, a new image processor, 45 autofocus points — and it’ll cost $1999.
Sorry, but Canon should (and probably will) get slammed for this. To release a $2,000 (body only!) camera in 2017 that doesn’t do 4K is just inexcusable. For that kind of money you could buy impressive camera hardware from several other manufacturers that performs virtually as well for still photography and shoots 4K video, and you could have done so more than two years ago. Canon can keep insisting on shipping devices that are clearly focused on either photo or video. And that might fly for pro equipment, where that kind of focus pays dividends. But the 6D is a consumer (or maybe a pro-sumer) camera, and as such it has to compete with Nikon, Sony, Panasonic, Olympus, and Fuji. All of them have figured out 4K for cameras at this price point, and Canon should, too. End of story.
Roger Ailes died today. I don’t rejoice at anyone’s death, but I won’t be shedding any tears.
By using his powers of manipulation, Ailes infected our political discourse with intensely deep cynicism, disregard for facts and respectful dialogue, and disrespect for education, diversity, and understanding. Though he is surely not singularly responsible, he was a primary architect of the discord and disunity that permeates our national conversation. And he did it all for his own political and monetary gain.
He was not a patriot. He was a traitor. He sold out our country.
He hated almost everything America stands for. He hated Americans. And he hated women.
Though I don’t celebrate his death, I won’t forget the disgusting legacy he leaves behind. May we always shudder when we hear his name, a lasting reminder of just how much damage a single person can do to our country.
I attended Saturday’s protest at SFO against Trump’s anti-Muslim executive order, which is where I shot the photo above.
Labeeb Ali worked as an interpreter for Americans in Iraq. That means he is a target for anti-American groups.
For that reason, lawmakers in the US made it possible for him to move here, providing a visa that would allow him to leave the danger of Iraq, a danger magnified immensely by his association with Americans. So he passed months of background checks, acquired that visa, secured a current passport (not always easy in his part of the world), and bought a plane ticket on a flight from Qatar to Dallas. Once his visa application cleared and his plans solidified, he tied up loose ends in Iraq and sold virtually all of his property.
Thanks to the president’s indiscriminate, irrational, and quite possibly illegal executive order, Labeeb Ali was not allowed to board his flight.
Because the president couldn’t be bothered to consult with government agencies that know something about these issues, or to take the time to develop immigration policy with a chance of achieving his purported goals, his executive order blocks anyone from seven Muslim countries — regardless of circumstance — from entering the US. And that includes people like Labeeb Ali.
Never mind that if this man actually were a terrorist, he has already had ample opportunity to commit heinous attacks on large groups of Americans. And never mind that he faces a very real threat of violence from terrorists, tragically ironic considering he’s being prevented from entering the US because Trump and his supporters claim banning him is necessary to prevent acts of terrorism. Trump doesn’t care that this man is about as far from a terrorist as someone can be. Because he can’t be bothered to tell the difference between terrorists and all Muslims. They all look alike to him.
Trump picked seven Muslim countries where he doesn’t have business interests, and where we don’t have particularly deep diplomatic ties that could gum up the simple black-and-white of his plan. He banned immigration because his supporters are scared of “Islamic extremism,” though they aren’t concerned with how it might be a threat, nor do they want to be bothered with realistic solutions for preventing an attack on our soil. No… Trump just needed to show them that he’s keeping the Muslims out, as promised.
But here’s the thing: Our country made promises and commitments to folks like Labeeb Ali. It’s disgusting and dishonorable to leave them hanging — perhaps literally, it’s sad to say — because they’re in the way of Trump’s steam-roller of executive orders.
But it’s going to get worse. The word’s going to get around that America turned its back on commitments to people like Labeeb Ali. When Trump decides to send troops to fight ISIS, they’re going to have a hard time finding people willing to put their lives on the line to help Americans for a promise that they can resettle stateside. And that’s a reality that could very well cost American lives.
In the meantime, it’s a reality that is hurting people who deserve much better than the closed doors with which we’re greeting them.
“They have killed my dream,” Labeeb Ali told The Washington Post. “They took it all away from me, in the last minutes.”
Sometimes the president needs help signing his name, so he’s hired the ghost of George Harrison to help.
(I know. You didn’t think it was George Harrison at first. It’s confusing because at first glance it does look a lot like Eric Clapton in the picture. Common mistake. It’s definitely Harrison.)
[h/t Phil]
I used to go to a lot of movies, mostly because I really like movie theatre popcorn. Since having kids, I get to two or three movies each year, at least in the theatre. I’m not sure what the last movie I saw in the theatre was, but it might have been Star Wars: The Force Awakens.
Though I loved that movie, I also felt like they laid the just-for-nostalgia pieces on a little too thick, and that there were entire scenes, characters, and even aspects of the narrative that seemed to serve no purpose other than to evoke fond memories of a time when Star Wars existed but Jar Jar Binks did not.
And in that case, I didn’t mind the nostalgia that much for three reasons: First, it had been a while since we had seen the Millennium Falcon, Han, Leia, Chewie, and Luke. We wanted to see them and feel reassured that the franchise was back on track after the disaster of eps. 1–3. Second, The Force Awakens was the first of a trilogy, and as such I’m ok that it gave in to that nostalgic indulgence, because it felt like it was establishing itself — a new trilogy from a new director telling a new story — laying the groundwork so that the next films wouldn’t have to be so overtly referential/deferential. Third, the Star Wars films (by which I mean eps. 4–6) were never particularly subtle or mysterious in their allusions to each other, references to classical motifs, and willingness to use cheese and even camp. So I felt like Abrams’ over-indulgence in nostalgia was forgivable in a film that was so much about establishing the connection to the earlier series.
I almost felt the same way about Rogue One: A Star Wars Story. In fact, I felt even better about it — that the narrative and the filmmaking were referential and deferential while at the same time making much of the opportunity presented by being just a little outside of the main, (dare-I-say) sacred core storyline.
And then the last shot happened.
(If you haven’t seen it and you don’t want me to ruin the worst, most craptacular surprise ever… stop reading now.)
My problem is the cartoonish CGI/live-action mashup of a certain beloved character, and the fact it was visually dissonant (not to mention creepy-looking).
But my problem is also that it ruined the tone of an otherwise solid third act. The narrative arc had done its job: having sat on the edge of their seats for the entirety of the final battle, the audience watches in delight and horror as a cavalcade of heroes overcomes an almost-impossible series of circumstances in order to set in motion the well-established heroism of Episode 4 (and beyond), only to face the inevitability of death with the certainty that the sacrifice was worth it. That final act — right up to and including the anonymous heroism of Vader’s lightsaber victims passing the thumb drive (that’s what it was, right?) along to its obvious eventual recipient — draws on the operatic essence of Lucas’ originals, while at the same time inhabiting a darker, more realist sensibility in which even honorable death is horrible and sad. The film was succeeding. We were there. I hadn’t closed my mouth or sat back in my chair in 20 or 30 or however many minutes it was.
And here’s the thing: we all know what’s going to happen with those Death Star plans. We don’t need reminding. And in case we do (even though we don’t), Jimmy Smits announces that he’ll send someone he trusts on a mission to find a certain desert-dwelling Jedi in hiding — a female someone based on his pronoun choice — and since pretty much the only thing (relevant) we know about his character is that he’s Leia’s adopted father, we know exactly who someone is.
At that point, with the Death Star plans on a ship that we’ve seen before, about to be in the hands of the person we know will put them into a certain short and beepy droid, the film could have ended. We’d seen what we needed to see. Of course, overstating the obvious is a hallmark of this franchise, so the story had to go one scene further.
They still could have pulled this off without ruining the movie. They could have shown the back of a female character garbed in white, and we could have even heard her voice. The camera didn’t need to show her face. We didn’t even need entirely original dialogue — our about-to-be heroine could have been preparing (or even beginning to record) her message to Obi Wan, taking us right up to the scene where we met her in 1977 (or whenever we were old enough to meet her for the first time).
But the scene we got instead was troubling on multiple levels.
Most obviously, the visual effect didn’t work. No matter how well Disney’s digital artists can pull off their CGI magic — and they deserve credit for all the ways they succeeded in this movie — there was no way we were going to feel good about an animated version of a character who we first meet moments later (timeline-wise) as a flesh-and-blood actress rendered on actual film. I won’t belabor this point, as it has been well-trodden by reviewers, and because I suppose the degree to which the (semi-)animated character was (not) effective could be a matter of opinion. I mean, someone could conceivably have felt like it didn’t look horrendous. ((Of course, no one sane would think that. But I imagine it’s possible that some aesthetically misguided and/or very mentally-ill person could.))
Second, I take issue with Leia being played by anyone other than Carrie Fisher. Sure… if they ever want to show her at an earlier stage in life then I’m ok with a young actress playing the part — people change over time and I could live with the idea that a well-cast young Leia grows up to become Carrie Fisher’s portrayal. But Rogue One’s Leia is not a younger Leia. This is the Princess Leia Organa of Alderaan we know, traveling on the very ship and wearing the very dress she was wearing when we first met her, and on the very mission where she hides the Death Star plans on R2 and stands up to (our first encounter with) Darth Vader. That Leia will always be played by Carrie Fisher, and I cannot accept any other portrayal of her, even one animated to bear an uncanny resemblance to Her Highness, daughter of Anakin Skywalker and future general of the Resistance (and future wearer of metallic swimwear).
Third, cartoon Leia’s dialogue is unforgivable. The film is, up to that point, about sacrifice. And in forcing us to watch its protagonists’ deaths one after another, it puts a fine point on it. Truly tyrannical evil cannot be defeated by self-interested individuals (like the cowardly rebel leaders who initially balk at the idea of going after the Death Star plans). Rather, the Dark Side’s vulnerability is only exposed — literally in this case — when people come together, setting aside their individual needs (up to and including their individual need for survival) in the interest of the greater good. Those heroes, we come to understand, may die as martyrs, but despite their demise as individuals, they collectively live forever in the legacy they share. This film, then, justifies its own existence as a documentation of that heroism, bearing witness to the actions of the previously-nameless souls who perished so that Luke, Han, Leia, and Chewie can save the day and get the credit.
So when cartoon Leia shows up and announces that the point of the whole thing is “hope,” it does a disservice to the immediately preceding narrative. I suppose one could say that hope is the point, and that Jyn’s pre-battle pep talk on that topic is a statement of the movie’s central message. But other than those two mentions, this film isn’t hopeful — though they successfully get the Death Star blueprints to the Rebels, our heroes all die — because this is the story of Rogue One’s ill-fated martyrs, not the story of Princess Leia, her secret twin, and their estranged biological father.
Indeed, Leia’s story (or, the one in which she is a principal character) is very much about “a new hope.” And it’s fine to connect the final moments of Rogue One to the opening scenes of Episode 4. But we didn’t need her pithy line (or her cartoon face) to draw the connection — it had been drawn already when we saw the guys in the familiar helmets on the familiar ship, and again when we saw her dress from behind.
Leia’s face and her stupid line ruin an otherwise great Star Wars film. It’s fun to watch, well-paced, and well-enough acted. In virtually every way, it’s composed as a worthy entry in the Star Wars franchise. That’s most evident in the attention paid to the operatic score, the artistry of the sets and establishing shots of planetary landscapes, the sound effects of the battle scenes, and all those tiny details George Lucas trained us to notice (blue milk, pilot call signs, particulars of Rebel and Imperial ships, etc.). I didn’t even mind the CGI version of a character previously played by a now-deceased actor, or the unnecessary (and maybe poorly-timed) comic-relief cameo from a certain pair of familiar droids. Neither diverted the narrative from its established direction, and the former example was less visually problematic than cartoon Leia because that character always had a certain dark cartoonish quality to him. ((The “uncanny valley” hypothesis — pointed out in Kelly Lawler’s critique in USA Today and Noah Berlatsky’s on qz.com — is right on with its suggestion that “human replicas that appear almost, but not exactly, like real human beings elicit feelings of eeriness and revulsion.” It works for the Grand Moff Tarkin character because he is eerie and revolting.)) So his now-computerized presence felt less arresting in comparison to Leia, whose familiar softness and “realness” felt missing from the abomination we found in her place.
Point is: great movie until the last 10 seconds. And maybe a great movie despite them. I continue to be excited for what’s in store next, both from the films that will open with scrolling text and from this series of tangential “stories.”
Nice piece in Haaretz on Trump’s choice for ambassador to Israel. The author is a prolific and talented writer, capable of deftly wielding fact-based argument as an antidote to ignorance and extremism.
But in this case, he didn’t need much of his trademark intelligence or rhetorical flourish. Rather, he only needed his computer’s “copy” and “paste” commands. Because that’s all it takes to show that David Friedman is poorly qualified for the job to which he has been appointed and dangerous to the US and Israel due to his propensity to use both half-truths and slanderous lies as means to his partisan, extremist objectives.
(Friedman’s readers’ apparent inability to tell the difference between his falsehoods and the actual truth is troubling as well, though perhaps unsurprising.)
Gruber posted on Monday about how the non-traditional TV “networks” are whooping the likes of ABC, CBS, NBC, and Fox, at least when it comes to broadcasting high-quality, award-worthy content.
One can reasonably argue that the broadcast networks have always produced mostly garbage, but the real change is that the broadcast networks have completely missed the boat on the megamovie revolution — shows that “take television seriously as a medium”. That’s obviously true for dramas like Game of Thrones and Westworld, but I think it’s true for comedies, too. Consider the elimination of the laugh track.
He’s not wrong, except with the implication that the broadcast networks ever had a chance not to “miss the boat.” I’m not a TV-industry guy, and my understanding of that economic world is limited to having lived in Los Angeles for 30+ years, but it seems to me like the big three/four can’t be expected to compete with “networks” that play by different rules.
The broadcasting paradigm is based on a single foundation: revenue depends on ratings. Revenue comes from advertising, and broadcasters’ ad rates are set by their ratings. The trick of the broadcast networks’ economic model is to charge advertisers more for ad time than the network spent to generate the ratings that justify those rates. (To make up some round numbers: if a show costs $3 million an episode to produce and its ratings support $4 million in ad sales per episode, everybody’s happy; flip those numbers and the show disappears from the schedule.)
That model results in decision-makers who are risk-averse. Why take a chance on a new product that has a decent chance of failure — even when it’s a new kind of content, or even a little change like laugh track elimination, that you actually believe in — when you have a sure-thing that’ll be good enough?
Furthermore, the broadcast networks’ revenue model tends to reward popular taste (and its cousin, low-brow proclivity) over critical quality. How many episodes of CSI and its seventeen spinoffs did CBS air? (I’ll tell you: way too many.) Were any of the CSI franchises ever considered by anyone to be high-quality drama worthy of critical acclaim? Nope. But they kicked ass in the ratings for a long time, so they made CBS a huge amount of money.
And that’s why networks only care about Golden Globes and Emmys if winning them generates buzz, higher ratings, and (therefore) higher ad revenue. (And maybe because actors/directors/producers like winning awards, and happy actors/directors/producers are theoretically good for networks, at least to a point.)
But that formula isn’t a guarantee. Plenty of critically acclaimed shows have been ratings duds. If NBC or ABC or CBS has a choice between an extra point-and-a-half in a key demographic and ten Golden Globe nominations, they’ll always pick the former. And that’s why they air the content they air, and they’ve conceded the trophies to Netflix, Amazon, and the cable networks that can (or at least hope they can) make money on high-brow.
None of this is news. This was the case back when HBO was raking in award hardware for The Sopranos. At the time, plenty of people let themselves believe that HBO was at an advantage because content creators could depict violence, nipples, and curse words. But their real advantage was always that they could afford to take a risk on serious episodic drama, which had the potential for a massively lucrative pay-off for them in the long term (in the form of subscribers who were hooked). The networks play a short game in which last night’s ratings matter right now. While this isn’t universal, and (I’ve been told by friends in the industry) it often isn’t quite so simple, this paradigm is still at the core of how the broadcast networks operate.
Gruber might be able to relate. He loves to tease the Android makers (and more so ignorant Wall Street folks) who go on and on about market share, and who bash Apple’s low performance in that particular metric. He’s observed all along (before anyone else really noticed) that Apple’s eyes aren’t on how many handsets or laptops they ship, but on how much money they make on the ones they do. (Because you can sell lots of phones if you don’t charge much for them. But that also means you won’t make much money.) So Apple is happy not to race to the bottom of the profit barrel in search of market share, because that market share doesn’t make them enough — or any — money.
John, here’s the thing: the networks may have missed the boat on the latest and greatest trends in TV. But their execs don’t care, because their eyes aren’t on Golden Globe statuettes, but on how much money they’re making for their networks. And spending a ton of money to make a show that isn’t a definite weekly ratings winner isn’t a smart play for them.
(For the record: I hate this economic model because I like quality programming. And most of my favorite shows right now are on Netflix or Amazon. I’m just saying that if the network broadcasters “missed the boat,” they did so because they made a conscious decision to stay on dry land. Is that cowardly of them? Sure. Does it result in bland, boring network television? Generally, yes.)
On Daily Kos, David Waldman suggests an outlandish way of getting Garland onto the Supreme Court in the brief period when outgoing Senators are gone and incoming Senators haven’t been sworn in. I initially blew it off as a silly fantasy. But…
Maybe the Vice-President and Senate minority leadership should be considering it. Why? Well. It’s been a while since Dems didn’t at least have the presidency, but let’s do our best to remember the difference between the way Democrats and Republicans have behaved in recent memory when in the minority:
Under W, Democrats basically moped around, complained a lot, penned thoughtful and analytical op-eds, and in Congress they tried to be a thorn in the president’s side.
Under Obama, the GOP didn’t settle for being a thorn. They utilized every option, and stopped at absolutely nothing, to block the president and his agenda. Thorns? More like giant tire-popping spike-strips across the highway. They played the short game by blocking budgets whenever possible, and they played the long game by focusing on local and state elections, which allowed them to gerrymander themselves into a lasting majority in the House. (Indeed, as has been pointed out in a number of places ((My favorite.)), the Republicans are numerically in the minority, with an ideology that’s less popular than ever, yet they have managed to win both houses and the presidency and they’re walking around saying they have a “mandate.”) They have played the game — short, long, and everything in between — better.
They utilized a strategic and disciplined approach, and it’s paid off. Nowhere is that clearer than with the Garland appointment. And that’s why I think Senate Dems shouldn’t dismiss the suggestion that they use some complicated procedural maneuvering to get Garland onto the bench.
Trying to move any other agenda item using this technique ruins the purity and genius of it. Only the Garland appointment allows the Democratic leadership to shrug across the aisle and say, “Well, you failed to do your Constitutional duty so you left us no choice. We tried to play fair.” And let’s also not forget: Obama appointed an older, fairly moderate jurist because he was indeed trying to play fair, and to appeal to moderate Republicans to buck their party’s leadership in the interest of the greater good. (Turns out “moderate Republicans” are an extinct species inside the Beltway.) So the Garland appointment has the additional virtue of being less purely partisan. ((While also being totally, completely, unambiguously partisan. It’s a Supreme Court justice who’d be a tie-breaking vote for chrissakes. This is about abortion, Citizens United, marriage equality, and tons more. Of course it’s partisan.)) Dem lawmakers would be throwing a Hail Mary to get a moderate on the bench, not a hyper liberal.
And they can also say: “We just wanted to give Justice Ginsburg the opportunity to retire on her own terms without having to worry quite as much about the influence of the fascist bible-thumper who will replace her.”
GOP lawmakers’ actions in the past couple years certainly open them to the accusation that they put party before country. ((See: the budget maneuvering that put the country’s credit rating at risk.)) Dems might not want to emulate that behavior. But here’s the thing: that stuff didn’t hurt Republicans at the polls. And more importantly: they now have both houses and the presidency, which places on them the burden of leadership. That burden, as the GOP proved when in the majority, is not incumbent on the minority, whose lack of power leaves them with no choice but to resort to extremes. (Unless the majority actually cares about partnership. Ha.)
Nonetheless, this won’t happen. Even after the Republicans had the chutzpah to sit on a Supreme Court nomination for the better part of a year, Senate Dems won’t have the chutzpah ((Or the extremists. The Tea Party did the GOP a big favor by doing the dirty work and letting the main party establishment stay insulated. See: Ted Cruz.)) to beat them at their own game.
Anyway, I’ll stop pontificating and get to the point:
I can think of no better way for Biden to kick off his 2020 run — and to set the tone for standing up to Trump/Ryan/McConnell for the next four years — than to go out having had more influence as vice-president in his last few days than anyone who has ever held the office has had in an entire term.
If there’s anyone who can pull it off, it’s Joe.
Tablet’s editorial board says AIPAC fails to represent both the left and the right when it comes to advocating for Israel on American Jews’ behalf.
The invitation to Trump is a symbol of what AIPAC has become — an organization staffed by mid-level incompetents who disgrace our community with their evident lack of both political savvy and moral sense. Let’s be frank: Some of us would be comfortable with a bunch of back-alley political knife-fighters whose only cause is the active defense of the Jewish people, while others want leaders devoted to making sure that our communal goals embody universal morals and social-justice values—regardless of how this might play on the geopolitical chessboard. Whichever camp you find yourself in, one thing is clear: What we have now in AIPAC is an organization with the failings of both, and the virtues of neither.
Headless Community in Bottomless Spiral
This is a fascinating piece of political rhetoric. The Tablet editors are saying that both sides can agree AIPAC is a poor representative of the American Jewish community, and then make their case from each side.
If they are able to step away from the partisanship and actually offer cogent analytical insight into AIPAC’s failings on both the left and the right, then that’s admirable and useful. But the problem is that virtually no one (at least no one who is actively engaged in/with the Jewish community) is able to actually back away from the fracas and say anything that isn’t seen by one side or both as an unfair attack. In other words, I’m wondering if Tablet’s editorial team falls into the very trap into which they accuse AIPAC of falling: trying to be a voice for all sides and ending up being a voice for none.
Nonetheless, as an attempt to analyze AIPAC without staking ground (or, being transparent about your ideology but attempting to transcend it for the purpose of analysis), I think it’s a good try, and a thoughtful, intellectually deft, and interesting one at that.
But…
At the same time, despite some strong language attacking AIPAC leadership (which we’ll get to in a second), the authors seem to be dancing around the point they really want to make: this is entirely about the organization’s leadership, or lack thereof. I think that’s a fair point to make, especially if you can support it with a well-reasoned argument. The problem is that the editorial’s authors hint at having such an argument, but it’s hard to believe them when (a) they don’t present much evidence of organizational chaos to support their claims ((By “evidence,” I mean thoughtfully-presented factual information that supports their claims, not, “AIPAC failed to stop the Iran deal… Can’t those screwups do anything right?”)), and when (b) they take numerous cheap shots and engage in petty ad hominem attacks ((Exhibit A: “…an organization staffed by mid-level incompetents who disgrace our community with their evident lack of both political savvy and moral sense.”)) on AIPAC leaders.
It should be fair game to claim that specific people lack political savvy or that they have exhibited behavior that calls their moral sense into question, especially if you support those claims in a manner that’s convincing or at least intellectually honest. But calling unnamed AIPAC employees “mid-level incompetents who disgrace” the community that they’ve dedicated themselves (with presumably best intentions) to serving? That Trump-esque diss, a petty and rhetorically lazy turn of phrase that must have felt cathartic and wonderfully naughty to type into the essay’s first draft, says more about its author than its subject. It undermines the editorial board’s entire point (as do the other cheap shots sprinkled throughout), and it should have been excised before an editor clicked “Publish.”
And also, it’s mean. I believe in the important practice of a publication’s editorial board writing with one voice, especially on important issues like this. But it comes off looking like cowardly bullying when an unnamed writer (writing on behalf of a seemingly faceless editorial team) attacks a group of individuals without naming names but with a nod and a wink that says, “We’re way too classy to name names but you know who we’re talking about, right?”
With all due deference to the folks behind the publication (for whom I hold an immense amount of respect and awe-filled admiration), Tablet’s typically erudite editors should be above that kind of shoddy writing, and as a publication that endeavors to elevate public discourse (instead of contributing to the absence of discourse down in the gutter on social media), it should be Tablet’s policy to steer clear of lashon hara.
Moreover, if the point is that the root of the problem is AIPAC’s staff, then the natural solution is for the membership (whom the editorial claims to stand with/for/behind) to act to replace said “incompetent” staff, since it’s incumbent on a non-profit’s employees to advance the mission articulated by the organization’s membership. Of course, the editorial’s stance seems to be that the problem is with AIPAC on the whole, so the suggestion that the organization is fundamentally broken makes sense. But in that case the shots at staff are both irrelevant and misplaced, since it’s the membership who made/let it happen (and if AIPAC is broken on a fundamental level, the problems surely run deeper than some “mid-level incompetents”).
If, however, the organization’s members and mission are still worthy of support, then the solution is an easy one: Get rid of the staff who don’t get it and hire people who do. Otherwise, Tablet ought to be blaming the thousands of people who donate to AIPAC, show up at AIPAC events, and partner with AIPAC in their own communities.
main photo credit: Photo Cindy (Flickr)