REAL Cameras


No offense, Apple. Or Google. Or any other smartphone maker.

But you are not going to be able to do this with your phone. At least not in the next several years, maybe longer.

The zoom alone is simply not possible with smartphone glass (i.e., the lenses on the phones). Computational photography only takes you so far, and optical zoom requires more than any algorithm can deliver. Will phones get there, eventually? I mean, I don’t see it, but when you throw the most brilliant minds and hundreds of billions of dollars at something, it seems that almost anything is possible.

The phone camera wars continue to rage, but in the meantime, I’m thankful for companies like Fujifilm and their astounding camera and lens technologies. I’ve been a proud, passionate owner of a Fujifilm X-T20 since 2018, slowly building up a small stable of lenses, and yesterday they absolutely annihilated the state of the art in the APS-C camera world with the unveiling of their new, flagship X-H2S (a mouthful, I know). Here’s an honest review by someone infinitely more qualified than myself, so you don’t have to take my word for it.

It’s pricey, sure. At $2499 for the body only, it’s far more than most would ever spend on a camera. However, like a pricey, spec’d-out Mac Studio ($7999 if you take it all the way to 11), when you need what that incredible machine can do, you know you’re going to have to spend money for quality. And like an Apple Pro Display XDR, even though it is VERY expensive, it is actually a relative bargain when put up against other things that can achieve similar results.

Well done, Fujifilm.

Posting Via Ulysses with the MacBook Air M1

Beautiful Saturday morning of Memorial Day Weekend 2022, coming to you live from Arlington, Texas. Heading down to see the rest of the Texas Worths at my parents’ lake house, minus my own 2 “kids” (one just graduated from college a few weeks ago, the other with one year to go). Looking like a perfect, 96-degree sunny day; the sunscreen beckons.

Enjoy the break, America, if you are fortunate enough to have it. We’ve all got a lot on our minds, weighing on our souls, but there is no better time than this to hug the ones you’re with.

This WordPress integration with Ulysses is ridiculously intuitive. As is Markdown. After clicking the icon to Publish, I wanted to pull back and type these words as an italicized postscript before actually publishing, and rather than look up how to do it, I just hit the clover key on the MacBook and I for italics. Yup, that was it.

Typing in Ulysses Markdown: iPad Pro M1 vs MacBook Air M1

Incredibly solid keyboard straight out of an old gear storage bin in the closet, no trackpad. I actually like this keyboard experience better than the MacBook Air’s – UNEXPECTED!!!


The text doesn’t seem as big onscreen on the iPad Pro as it does on the laptop; I think I like the look on the laptop more.

Also very unexpected!!!

Sitting down to type on the iPad. Love the springy bounce of the keys with the OLD standalone Magic Keyboard (not to be confused with the new Magic Keyboard case built for iPad); no wires, no physically tethered connection of any kind. Heels of my hands resting on the table, while the virtually flat keyboard seems almost a part of the wood upon which it sits; no cold aluminum laptop keyboard deck with a giant trackpad with which to contend. Very comfortable.

Wish the cursor in Ulysses on the iPad was located in the middle of the screen as it is on the Mac version, rather than the bottom of the screen. Surely that’s something I could change, right?

DUDE!! Enabled Typewriter mode in the iPad version and set Fixed Scrolling to Center. We’re doing it!!! (I’m no Ulysses savant, but that doesn’t mean I can’t love it, does it?)

Love the double hashtags for large font in Markdown

Which gets even bigger with single hashtag

Triple hashtags result in text that is slightly larger and bold compared to normal

Then back to normal.
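For anyone following along at home, the number of hashtags in Markdown controls the heading level, with fewer hashtags producing a larger heading. The progression described above looks like this in raw Markdown:

```markdown
# Largest heading (single hashtag)
## Large heading (double hashtags)
### Slightly larger and bold (triple hashtags)
Then back to normal paragraph text.
```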

Display on the MacBook Air is plenty good enough, if not quite as good as the iPad Pro. And the deck of the MBA is so large that it is totally fine for resting my hands on it as I type. I’ve never had a false trackpad touch register, so no issues there.

Every experiment, be it an old SurfaceBook 2, an old iPad Pro 10.5, a new Surface Pro 8 or a new iPad Pro M1, leads to the realization that the new (to me) MBA M1 is totally fine. It’s really all I need, in the final analysis…however, as we all know, left brain “analysis” is only part of the equation. I’m still not convinced that the iPad magic fairy dust, which activates pesky intangible factors such as “feeling” and “state of mind” and “freedom” and “new and different” and other blue sky vibes, won’t ultimately win out against all logic. Would not be surprised in the LEAST if that’s how this plays out. But all things considered, in terms of what I intend to do with the machine, the MacBook Air M1 does a little more stuff, a little more like I’m used to doing it, a little nicer than my old devices, a LOT faster and for a LOT longer between charges, all while leaving a little more money in my pocket. The saga continues.

Thoughts on Linux, Mr. Robot, Infosec, and Hidden Reality

It all started long before Mr. Robot.

WarGames, maybe. By then, Atari and Space Invaders had taken over my world, and I saved my quarters by playing video games on the family TV. A friend whose dad was a doctor had a portable computer in a briefcase, monochrome green text, circa 1981 or ’82. But WarGames in 1983 shined a bright light on the possibility of a curious smart kid connecting to, and changing, the whole world from home. You would need a computer, not just a video game console. And a 300 baud modem (faster, if you could get it). But as Matthew Broderick showed, once you had those, anything could happen.

Finding free phone numbers (as opposed to ones billed by time used) to call and play text-based video games or search around for a backdoor to NORAD became an obsession, which led nowhere. I didn’t win the games, refusing to cheat with help from friends or spend the amount of time required to figure it all out myself (homework and chores and playing outdoors with friends and riding bikes to malls and sports were also required in order to live); I never found an unguarded entry into the national security complex of the U.S. (or any other nation for that matter). If I had, I would have been too terrified to go in. They might catch you, and if that happened, who knows what they were capable of? The “they” of course being “the government,” always the bad guy in these imagined scenarios. Which government? Soviet? U.S.? Federal? State or local? Something unknown, secret? Didn’t matter. The government was scary and bad; just ask E.T. Or Luke Skywalker.

Probably the closest I ever got to hacking into something I wasn’t necessarily supposed to be in was during freshman year at UT Austin. Almost no one had a computer in their dorm, but I did. Along with a dial-up modem. I noticed something when I was searching through the physical card catalog of one of the university libraries, cluing me in to the fact that an “online” card catalog existed. All I needed to do was find out how to access it. That I did, and once in, was free to look at library holdings from my dorm room.

Looking back, I am certain that anyone would have been allowed to do so, even though it wasn’t publicized as a student resource as far as I knew. But at the time, it felt like I had discovered something secret, something I wasn’t supposed to know, even though it was harmless. And I didn’t like that feeling. I thought, “hey, the library should try to keep people who aren’t supposed to be there OUT! What if somebody DID something?” I didn’t know it, but I was blue team long before the term likely existed.

The aversion to seeing things I’m not supposed to see has never ceased, but the thrill of making something do something other than its stated intention has always remained. Whether it was punching in a sequence of buttons on an early TiVo to display different menus and options, or installing a different OS on my computer, the feeling of doing something that most people would not attempt has an indescribable exhilaration that isn’t accessible on the well-worn path of doing what everyone does. That’s the only thing that explains someone who was attracted to the TRS-80, the Commodore 64, Macs at the time when the company was about to go under, BeOS, Sun Solaris, Windows Phone, and Linux. I wanted what other people didn’t; once Apple and iPhone became dominant in the social conscience, I was no longer interested. If you want me to run the other direction, all you need to do is catch on with the mainstream.

Two things that I’ve discovered over the past 2-3 years are Linux and Mr. Robot. These will NEVER catch on with the masses; they’re too strange, too out there. They require a LOT of open-mindedness and a willingness to leave your comfort zone. There are things in Mr. Robot that are just hard to watch, and I wouldn’t, if it were just to glorify them or for some other gratuitous motive. They are part of a reality that is hidden from me, but that nonetheless exists. It’s eye-opening, in the same way that seeking out what is actually happening in, say, Syria is. Americans will never see the full, tragic story, or really any of the story anymore, of what is going on in that part of the world unless they actively seek it out. Isn’t it strange how coverage of Middle East conflicts has completely ceased? Make no mistake, they continue to rage. We just aren’t being shown it. It’s uncomfortable, it doesn’t sell. But we need to know, because if we don’t, then we can’t hold anyone accountable for the continuing government atrocities being committed by Syria, Saudi Arabia, and others (there we go with “government = bad” again) or demand change. Which is why eleven years in, nothing has really changed in their civil war, nor in Yemen.

If you’re looking to see how the other half (or 2%) lives, give Linux a try. Choose your fighter: I’ve gone with Ubuntu, the most mainstream of the distros. The one featured most heavily in Mr. Robot is Kali Linux, which comes with tools geared toward people who are accessing networks and systems that they may or may not have the rights to access, or “penetrate,” whether hired by companies as network security testers or for other, more nefarious purposes. If installing a second operating system is too much to ask, try watching Mr. Robot on Amazon Prime Video. And if all else fails, if you REALLY want to open your eyes, do a little online digging and take a look at what’s going on in the Middle East, in Africa, in Asia, wherever. You really have no idea, and that, dear reader, is completely by design.

Living In the Future: Windows Phone, Pixel, and iPhone

My daughter is home from her first semester at college. It’s wonderful to be able to hop in the car and run to Starbucks with her, just like the good ol’ days. Great one-on-one time, no worries, just her and me.

She did the driving this time, and I asked my phone a question about something that we were discussing. She immediately jumped all over me, in a good-natured mocking sort of way, for “talking to my phone,” which “NOBODY does, dad. Except you and two of my friends who always try to talk to Siri and it always fails.” This is a point I have made many times over the past couple of years with my Pixel and Pixel 2 phones: Google users use their voices, iPhone users don’t. Why? For the simple reason that Siri sucks and Google Assistant is great and voice is vastly superior to typing on a tiny glass keyboard (yes, even an XL phone screen is tiny compared to an actual computer keyboard).

This also reminded me of a post I did not write just after Thanksgiving. Walking by a big, printed canvas photo hanging on our wall that was taken in March 2014, I was struck by the memory of having shot it with my Lumia Icon, a Nokia Windows Phone that had an AMAZING 20 megapixel PureView camera with a Zeiss lens. The detail and color on that 16×20 print are spectacular. Again, this was almost 5 full years ago. The phone was white, just like the Pixel 2 I recently sold in exchange for a Pixel 3 XL, and the feel was pleasingly similar to the Pixel 2 in the hand. Along with the camera, the 441ppi 1920 x 1080 AMOLED display was simply stunning (Apple’s newest phone, the XR, has a 326ppi 1792 x 828 LCD for comparison), as was the responsiveness of the entire system. It was a 5-inch screen, an incredible jump in size from the iPhone 4S with Siri that it replaced, and handing it to my iPhone-toting family invariably drew a mix of mocking and disdainful reactions. “You are SO weird, dad!” But then I’d bark out a few Hey Cortana voice commands, and the phone would actually do what I told it to do. Miraculous! And oh, what Microsoft taught the world to do with typography as UI! Apple customers take for granted the use of different font sizes throughout the iOS experience, which replaced gaudy “skeuomorphic” (only die hard Apple devotees, as I once was, would be familiar with such terms) design elements of previous iOS versions. But Microsoft often shows the way for others to follow, thanks to the decades of time, talent, and treasure it has poured into basic research, including anthropological studies by actual Microsoft-employed anthropologists, and that was definitely the case with Windows Phone and its beautiful typography. People’s attention can be wrangled and focused by clean typography, with overarching elements rendered larger than the elements nested beneath them.

It saddens me to contemplate what so many iPhone users miss out on, just because they have to use what “everyone else” is using. The things being accomplished by Google through software, through artificial intelligence, in computational photography, are insanely next-level, especially compared to Apple’s phone cameras. Most people know that “Google phones are better at low light photography,” but in practice, every time we are out at night, the iPhone users insist that we use my Pixel for the group pics. It is worth going out of the way to find and harass the Google phone guy into taking the picture, rather than anyone using their own iPhone. Which, of course, I am very glad to do, because nothing bothers me more than suffering through embarrassingly bad low-light iPhone photography when a simple year-old Pixel 2 would have made all the difference. The iPhone users are aware of the picture-taking inferiority of their devices, but mostly unaware of the other shortcomings. This is especially true for kids, teens, and college students, but also for run-of-the-mill people who simply don’t care about the latest and greatest. As long as it’s got an Apple logo, they know their star-bellied Sneetch status remains inviolate. As long as the message bubble is blue and not green, they are safe. As long as they are paying 20% price hikes over last year for the privilege of changes or improvements that are indiscernible to 99% of them, all is well. Yes, I know “trust” and “Google” and blah blah blah. But if anyone believes Apple is not collecting more than is known or suspected by its users, I would venture to guess that they are mistaken. No, it may not be as much as Google. And Facebook is beyond any of this, so I leave them out of the discussion entirely. Even if an iPhone user is giving their money to Apple instead of Google or Samsung or LG, they are still being tracked by every app or website they use on that iPhone, so it really does not matter in the big scheme of user privacy considerations.

What I am talking about is the pure joy of experiencing the latest technological innovations from the greatest technological innovators, Microsoft and Google. I really do wish Windows Phone had survived, because I truly loved the Windows Phone tile-based UI, with the ability to customize both the size and placement of individual app tiles on the screen, something that still cannot be done in either iOS or Android. If this is what we had 5 years ago, what would we have now, if full-fledged development had continued? One can only wonder.

Don’t Try To Make It Something It Isn’t

An IBM PCjr was not a Commodore 64. An iPhone was not a Blackberry. An iPad is not a Mac or a PC. The PCjr, while far more useful than a Commodore 64 for things that were rapidly becoming more important than easily programming sprite graphics in the exact same way it was done on the C64, or using an already-purchased Commodore cassette recorder to save programs, was a bitter pill to swallow for a 14-year-old boy who cared nothing about IBM-compatible Okidata dot matrix printers. I didn’t care about writing papers that I could print out for school; I wanted to do what I had taught myself to do, the way I had taught myself to do it, and I needed a C64 for that. Soon enough, however, the capabilities of the PC changed the daily existences of millions, even billions, of people around the world, and the C64 (along with the PCjr) was eventually, inevitably, left for dead.

I also recall years spent in a corporate environment with people who were more obsessed with their Blackberries than any teenager with an iPhone. The thumb-operated physical keyboards blazed away at all hours of the day and night, at desks, during meetings, around the house. Then the iPhone came along, and it had NO KEYBOARD BUTTONS. How IDIOTIC! It may seem hard to fathom for those who did not witness it firsthand, but for a long time, many Blackberry loyalists completely eschewed iPhones. But what the iPhone COULD do, once understood, changed the daily existences of millions, going on billions, of people around the world in the years that followed, once again relegating a beloved technology to the graveyard of gadgets past.

Will the same be written about the iPad vanquishing laptops with desktop OSes when the annals of tech history are updated at some point in the future? No one can answer that (yet). I can state that I am banging out the letters that form the words which convey these thoughts on an Apple Magic Keyboard paired wirelessly via Bluetooth to a 10.5 inch 2017 iPad Pro, alongside an Apple Pencil which is soon to tap the Publish button in the WordPress interface above and to the right. I have the iPad oriented vertically, in portrait mode, rather than in the traditional PC landscape orientation that is preferred by so many. It is my personal preference to closely replicate the experience of writing in a portrait-oriented pad or notebook, which I am able to freely exercise due to the fact that this is an iPad, not a laptop. The device also affords me the unreasonably delightful experience of slowly flicking the page up and down with the Apple Pencil to linger over what I have written as I write, then to touch the screen with the Pencil tip directly where I want to make a revision, all without having to grasp a mouse and hold buttons down while some primitive, non-True Tone screen featuring Stone Age refresh rates jerkily scrolls up or down as it impatiently waits for a mouse button click to engage a disembodied pointer icon for insertion of a cursor to make a change.

For more than a few iPad users, the choice of device/screen orientation is often removed, dictated by the Apple Smart Keyboards attached to their iPads, forcing the horizontal layout of the screen. Many others simply prefer landscape mode, likely as a subconscious vestige of years of traditional Mac or PC use, or perhaps due to the iPad Pro’s split screen that can be enabled in landscape mode for exceedingly useful multitasking capabilities. In fact, as I sit here typing in portrait mode, contemplating said utility of landscape split screen while I ponder a synonym for the word “beautiful” that I employed in the preceding paragraph, I just rotated the iPad into landscape and dragged a Safari tab over to split the screen and look up a better word than “beautiful.” “Delightful” is much better. There, the change has been made.

And now, after scrolling through my words for one final pass with a screen possessing a buttery smoothness that is still, if you pause to consider it, almost impossible to describe with words, an incomparable experience that is available on no device other than the iPad Pro, I use the Pencil to touch the Publish button.

Apple, Google, or Microsoft?

For me, the answer is yes.

I try them all, I love them all, I hate them all. Ok, maybe I don’t hate any of them, but the products produced by Apple, Google, and Microsoft elicit some very strong emotions from their users, myself included. Being old enough to remember living through the rise of each of these tech titans, the hopes and dreams of new and better futures promised by them, allows one to look back and evaluate who was, is, and will be most likely to deliver. To do so, we need to look back at not only the founders, but also to the ones chosen to succeed those founders, and why.

Steve Jobs wanted to create beautiful things. He loved beauty, and thought the world would be a better place if it were filled with more beauty, produced by more people. The Mac was his solution to that problem. Another thing he loved was music – simply look at what he decided to name his company, after the record label of his favorite band, the Beatles. So he continually threw everything he could behind music:  sound on the Mac, iPod, iTunes, ripping and burning, Apple Music streaming, Beats and headphones, Apple Music (formerly iTunes) Festivals, headphone jacks in EVERYTHING, including the iconic ads for iPods featuring silhouettes of corded headphones worn by people listening to music on their beloved devices.

Bill Gates wanted . . . well, I’m not exactly sure what. But business dominance was his early pursuit, and he was comfortable with winning at any cost. So, perhaps his dream was to dominate in the nascent arena of computing, and to do so by outsmarting, or more specifically, outmaneuvering, his competitors. If the burgeoning world of corporate computing, with its logical extension of personal computing, were a game, he would lay it out as a strategic battlefield with pieces to be moved, escape avenues for adversaries to be cut off, and unconditional capitulation the preferred outcome. Only the full weight and resources of the U.S. federal government and court system thwarted his ultimate victory.

Sergey Brin wanted something other than the Soviet Union. In other words, something other than a crumbling, totalitarian society paralyzed by corruption and the lies that were required to maintain that power structure. For him, born in Moscow and then transplanted to the United States and immersed in its institutions of higher learning and cooperation, information and the freedom to use it were all the power that he seemed to desire. That impossibly altruistic motivation for him and fellow Stanford computer scientist Larry Page was enough to drive the creation of a company that would grow to be potentially more powerful than Apple and Microsoft combined, and conceivably as threatening as the Soviet Union was, minus the nukes and troops. You might say that does not sound like much of a threat, but remember, the objective of the Soviet Union was not to annihilate civilization. It was to control it. To manage any threat that may be posed to the Soviet Union itself. No different, really, than any other country’s overriding concern. The most valuable resource the Soviet government had was its intelligence apparatus, which could identify and eliminate small or medium threats before they became larger and more dangerous to the state. The Soviet Union’s leadership would have done anything to acquire the information that Google now possesses on billions of people around the world: where we go, who we talk to, what we say, what we buy, read, watch, and cheer for, the schools and classes we have attended, who taught them.

Somewhere along the line, something in each of these companies changed. And it changed when the original leadership was replaced. In the case of Apple, it remains to be seen whether the change was for the better or the worse since Tim Cook took the helm; yes, they are more protective of consumer privacy, but they are not as obsessed with providing the best tools with which to create, and they have approached or even surpassed Microsoftian levels of anti-competitive consumer lock-in behavior. Not to mention the seemingly trivial, yet really not, decision to eliminate headphone jacks not only from their own portable devices, but also by virtue of their market leadership, the headphone jacks of most of the important portable devices being turned out today. With Microsoft, it seems as though it is better now that Gates and Gates Jr. (Steve Ballmer, an even more hyper-competitive, less nerdy version of Gates) have been succeeded by the more cooperative, plays-well-with-others Satya Nadella. And for Google, we are all far worse off than we were before the founders brought on “adult leadership” in the form of Eric Schmidt, who introduced maximum profit as the main reason for Google to exist, rather than organizing the world’s information with a guiding principle of “don’t be evil” (which has recently been officially dropped by Google as a guiding principle as they pursue censored product offerings monitored by the Chinese government as well as the use of their internally developed AI advances for military purposes by national governments).

Where does all of this leave us? Well, it means that in order to figure out where we might be headed, it is really useful to understand where we came from. And in a world where so much of our daily existence incorporates certain devices and services, we can ask ourselves what we prioritize in those devices, and what we are willing to give up in exchange for what they provide for us. Will we give up more money? Camera quality? Some freedom of choice in how to do things? Interoperability with our families and friends? Privacy? That’s what I endeavor to explore and explain here. Not only for you, but for myself. As one who freely and happily employs technology and devices from all three of these extraordinarily powerful entities (I exclude Facebook from all of this for the simple reason that, given what has already transpired so publicly with that company, I can only recommend that people stay as far away from any Facebook product as possible; please know that I am exercising tremendous personal restraint in the measured word choice employed here, and that I do believe that it is just a matter of time before the company is really taken to the woodshed by various governmental authorities including those in the United States and Europe), I am keenly aware of the trade-offs that I make with each decision. I would like to share some of that experience and insight with anyone who is interested in it, with whatever small slice of finite attention he or she is able to spare.

Consider the iPad Pro 14.9

Builder of games, maker of apps, and hacker of things Steve Troughton-Smith tweeted a thought into existence outside of his own mind on Halloween, but I just came across it via 9to5mac today:


There’s a nice thread there, and it makes perfect sense. Just as the Air laptops came in 11 and 13-inch varieties and the Pros came in 13 and 15-inch, I believe that size-specific delineation is the right one for the iPad lines as well. 11 inches is portable as hell, and I certainly do not begrudge those who rely on iPad Pros for their professions and either want or need a smaller size. For my money though, if there were to be just two sizes of Pro, 13 and 15 (or 12.9 and 14.9, whatever) seems right. The extra inches for photo, video, coding, going over blueprints or other digital designs with clients, and all other manner of creating, would be so in demand that even Apple must be aware and working towards that eventuality. If one size had to be dropped, I suspect it would be the 11-inch Pro, leaving us with a 13 and a 15-inch Pro, along with consumer sizes of 11 and something smaller. [I refuse to call it Mini; the word “Mini” is insulting to me, whether applied to iPads or the newest Macs, because that word connotes some lesser form of a thing. These devices are full-fledged, consummately capable machines, and naming them “Mini” is as ridiculous as calling something with nothing more than a larger screen size “Max.” Unless you are someone other than Apple, which, if Apple isn’t careful, it is in danger of becoming.]