Just got my car, with stick

March 17, 2009

I just got my 1995 Cavalier, which has a stick shift.  Dad can’t drive stick, so I had to drive it home… and I can’t actually drive stick either!  But I know how driving stick works, so I did it anyway.  There’s this little thing called practice, though… that part where you actually do something a few times so you can do it right?  I need that.

My Cavalier

The car flashes the ABS light at me sometimes: it comes on for a while, then goes off.  The brakes need fluid, possibly pads, possibly other work.  The car needs a catalytic converter.  I want to do a full fluid change, filter change, plugs, oxygen sensor, the works.  Pretty much I paid $2000 for a car with 170,000 miles on the body and a 25,000 mile engine, and now have to get it into workable shape.  No big deal, right?

I have to shop for parts after the inspection.  The auto shops charge a premium for basic parts, so I have to go pick up aftermarket parts, probably Beck/Arnley or something, for cheaper.  The OEM parts must come from who knows where, China maybe, air dropped, overnight; they cost $100 at the warehouse and $200 at the shop, plus labor to put them on!  All this means I might spend $1000, $1500, maybe more to get the car into workable shape.  I don’t have that money, but it’ll come eventually, so I’ll try to keep the mechanics busy and bring in $800 or so a month.  Maybe I need a weekend job….

Controls

Anyway, the car has no tachometer, and I haven’t driven stick before.  How do I know when to change gears?  Somehow, it just comes naturally.  Getting into first does not come naturally, however; I need a parking lot to practice that!  I just have to learn how to drive–again–that’s all.  It’s a small car, I somewhat dislike the tinted windows, it could use a vacuum, and of course a little work.  The stick shift makes me nervous, but only a little.

I’d rather have a better car.  After this I’m grabbing maybe a Miata or something cheap like that, once I have big cash reserves and can afford to go to school regularly.  Actually, the Cobalt isn’t a bad car; it’s just that I hate all the fancy electronic everything in it, and the automatic transmission sucks dog balls.  My Nissan had manual vents: you turned a knob and it moved a cover to change which vents gave air.  My Cobalt has a 24-way selector switch, and the computer reacts a second later, moving the covers with a small motor or something.  The automatic transmission reacts about as fast, which sucks for driving.

So, maybe, just maybe, I might decide on selling the Cavalier in a few years and buying a 2005 or so Pontiac G5 GT in manual.  By then I’ll be able to drive manual, and that car (yes, the G5 is a rebranded Cobalt) sucks in automatic.  The G5 GT looks fine though, pretty much a Cobalt Coupe, especially nice in black.  Other options include a nice AWD Nissan, or a sporty RWD car.  I don’t want a Miata as my daily driver, but owning a small car like that would work for a secondary.

Overall, the Cavalier is a step up.  It teaches me to drive stick, it costs less than the Cobalt, and I can rid myself of that huge debt and insurance payment.  In a few years I can sell it and almost recoup costs, putting me in a better overall position and allowing me to move forward with another car, with stick or automatic (probably stick).  I can also save money and go to school… so sweet.

Real Applications in EyeOS

March 11, 2009

As a thought experiment, I am considering a region-based rendering engine.  Effectively, think of sprite-based or tile-based rendering, but with variable-sized areas.  The experiment aims to propose a simple way to transmit the visual display of a document through a Web browser: in other words, to run OpenOffice.org in EyeOS.

First, let’s establish the scope of this engine.  It would be built directly into OpenOffice.org itself, providing an alternate output display.  The software would describe itself to whatever system serves it through a Web browser (such as a PHP script), including its layout and rendered data.  It would describe the rendered area in pages, as requested; updates would only cover on-screen areas, plus the overall document height.  To run the engine, you would ask the binary to create a temporary UNIX socket or pipes plus a PID file, which the server software would notice and handle accordingly, passing the output along to the browser client.
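
As a concrete sketch of the application side of that handshake (in Python purely for illustration; every name and path here is my own invention, not real OpenOffice.org code): on startup, the binary drops a PID file and a UNIX socket into a per-instance directory for the server to discover.

    import os
    import socket

    def announce(lapps_dir):
        # One directory per running instance, keyed by PID.
        instance_dir = os.path.join(lapps_dir, "ooffice-%d" % os.getpid())
        os.makedirs(instance_dir)

        # The PID file lets the server check the process still exists.
        with open(os.path.join(instance_dir, "pid"), "w") as f:
            f.write(str(os.getpid()))

        # The UNIX socket carries the actual display-update protocol.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.bind(os.path.join(instance_dir, "display.sock"))
        sock.listen(1)
        return sock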

For a more focused example, let’s assume OpenOffice.org wanted to work with EyeOS to run the OpenOffice.org desktop client through EyeOS without massive downstream bandwidth via VNC, or a Flash plug-in, or bastardization through a completely new JavaScript rendering engine.  OpenOffice.org would include code to support this when called as ooffice_web; EyeOS would have an application that executes a local “ooffice_web writer ${USER_CONFIG_DIR}/lapps/”.  EyeOS would further examine each subdirectory under ${USER_CONFIG_DIR}/lapps/, verify the PID file, and verify that the PID’s /proc/$pid/fd/ contains the pipe files, or apply some such check that the PID really indicates the application you think it does; if the check fails, EyeOS removes the directory.
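
A sketch of that verification sweep, continuing the assumed names from the sketch above (again Python as illustration; EyeOS itself is PHP).  One caveat: /proc/$pid/fd shows the path for named pipes, but UNIX sockets only show up as socket:[inode], so this version settles for checking that the process is alive and the socket file still exists.

    import os
    import shutil

    def sweep_stale_instances(lapps_dir):
        for name in os.listdir(lapps_dir):
            instance_dir = os.path.join(lapps_dir, name)
            try:
                with open(os.path.join(instance_dir, "pid")) as f:
                    pid = int(f.read().strip())
                os.kill(pid, 0)  # raises OSError if no such process
                alive = os.path.exists(
                    os.path.join(instance_dir, "display.sock"))
            except (OSError, ValueError):
                alive = False
            if not alive:
                # Stale entry: remove the whole instance directory.
                shutil.rmtree(instance_dir, ignore_errors=True)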

EyeOS would use AJAX and a JavaScript timeout to repeatedly poll an EyeOS display update script, which I believe it does anyway.  The application would specify a sliding interval, effectively giving the minimum time it expects between updates; EyeOS would follow this or its own minimum interval, whichever is longer.  With multiple applications running, EyeOS would poll on the lowest update interval needed, and within each poll only query the applications that have exceeded their own update interval.
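
In other words (a minimal sketch; the names and the quarter-second floor are my assumptions):

    import time

    EYEOS_MIN_INTERVAL = 0.25  # assumed server-side floor, in seconds

    def next_poll_interval(app_intervals):
        # Global timer: the fastest app wins, but never below the floor.
        return max(EYEOS_MIN_INTERVAL, min(app_intervals))

    def apps_due(apps):
        # Within one poll, only query apps whose own interval has elapsed.
        now = time.monotonic()
        return [a for a in apps if now - a["last_update"] >= a["interval"]]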

To update, the application would make constant judgments about which regions to update, and how.  The update protocol would basically involve the EyeOS script opening the pipe or socket and requesting a screen update.  A “blank screen” flag would signal that EyeOS has no state; OpenOffice.org would send a full frame with an identifier in the response.  An identifier would signal that an EyeOS thread holds the last state sent under that identifier; OpenOffice.org would then send only the changes to the display.  EyeOS could also signal that the user clicked the mouse, pressed keys, scrolled, or resized the window; in those cases the contents of the document and/or the viewing window would change, and the application would send updates.
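
A sketch of that exchange, with the message shapes (the field names, the None-as-blank-screen convention, and the app object’s methods) entirely my own invention:

    def request_update(state_id=None, events=()):
        # Built by EyeOS; state_id=None is the "blank screen" flag.
        return {"state": state_id, "events": list(events)}

    def respond(app, req):
        # Built by the application; `app` stands in for the rendering engine.
        app.apply_events(req["events"])  # clicks, keys, scrolls, resizes
        if req["state"] is None:
            # No shared state: send a complete frame plus a new identifier.
            return {"state": app.new_state_id(),
                    "full": app.render_viewport()}
        # Shared state: send only the regions changed since that state.
        return {"state": app.new_state_id(),
                "delta": app.changed_regions_since(req["state"])}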

As for the viewing window itself, the application would send EyeOS the overall size of the complete virtual canvas.  OpenOffice.org would say the whole document consists of three million vertical pixels (more if you zoom in, less if you zoom out), and EyeOS would render a scroll bar to match.  The actual viewing area (scrolled down so far, scrolled right so far, so high, so wide) would dictate what OpenOffice.org sends; scrolling would produce a full viewing-area update.  No need to render the full document to the screen for fast and easy scrolling, after all.
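
The bookkeeping is tiny; a sketch, with the window and document sizes invented for the example:

    def visible_region(scroll_x, scroll_y, view_w, view_h, doc_w, doc_h):
        # Clamp the viewport rectangle to the document's virtual extent.
        x = max(0, min(scroll_x, doc_w - view_w))
        y = max(0, min(scroll_y, doc_h - view_h))
        return {"x": x, "y": y, "w": view_w, "h": view_h}

    # A 1280x700 window scrolled 1.2 million pixels into a document three
    # million virtual pixels tall asks for just that 1280x700 slice.
    region = visible_region(0, 1200000, 1280, 700, 1280, 3000000)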

OpenOffice.org has its own rendering engine.  It uses its own font engine, its own image engine, and the like.  It displays things differently from how your Web browser would display something formatted exactly the same way in CSS and XHTML.  Because of this, the entire rendering has to reach the browser as graphics, not as text and formatting.  That poses a huge bandwidth problem, which we can only solve with a specialized rendering engine tuned for the application: in our case, pre-rendered, pre-cached fonts and graphics.

For a given zoom level and font size, OpenOffice.org decides that Times New Roman ‘A’ at 14pt bold takes up exactly so many vertical pixels, along with a bunch of other things that determine an exact rendering.  It should not matter whether the text uses a 10pt font at 90% zoom or a 9pt font at 100% zoom (10 × 0.9 = 9 × 1.0); the letters take the same size on the screen, and should render exactly the same.  This may not line up with real life, but it makes sense, and maybe they should fix the rendering engine.  Graphics and word art behave similarly, rendering to certain dimensions and identifying uniquely within the document.  Table borders come down to a 1 pixel dot, or something slightly more complex but 1 pixel wide or high, stretched out.

In any case, OpenOffice.org could use PNGs with an alpha channel for the pre-rendered fonts, describing to EyeOS exactly where in the rendered viewing pane (by coordinate, from the 0,0 top-left corner down and over) the letters fall, and letting it place them.  A similar tactic would describe table borders and graphics, along with special cursors (e.g. passing the pointer over a table border to resize).  EyeOS would cache these images, and the Web browser would cache them too as they get reused again and again.  Once the document closes, EyeOS purges the cached images.  In the meantime, the browser only downloads so much stuff (more gets generated when you change font sizes or zoom level) and mostly reuses the images; in the steady state, the main recurring update is the blinking cursor.
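
The cache itself amounts to a dictionary plus a placement list; a sketch, with the key shape and all names as assumptions:

    glyph_cache = {}  # (font, size_pt, style, zoom, char) -> image id
    placements = []   # (image_id, x, y) records describing the viewport

    def place_glyph(font, size_pt, style, zoom, char, x, y, render_png):
        key = (font, size_pt, style, zoom, char)
        if key not in glyph_cache:
            # First use only: render the PNG once; the browser then
            # caches it by URL like any other image.
            glyph_cache[key] = render_png(*key)
        placements.append((glyph_cache[key], x, y))

    # Repeated text is nearly free: the second 'A' on a page adds only a
    # placement record, not another image download.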

Doing this in the OpenOffice.org back-end means that the back-end knows exactly what gets placed where, and can simply render to a different output format.  Unlike VNC or video compression, the screen does not get sectioned off and processed as an image; instead, the actual regional layout of the screen–the internal state of the rendering engine just prior to outputting a rasterized graphic–gets described as a regional layout of similar image elements (pre-rendered fonts, table borders).  Because of the large amount of repeated data (mostly rendered text), most of the screen gets reused again and again, persistently, without re-examining the entire screen and having to dig overlapping similar regions (tracking, kerning) and all kinds of garbage back out of the pixels.

I believe this sort of rendering would allow embedding the real OpenOffice.org (and other applications) into EyeOS entirely through Web standards such as CSS, XHTML, and JavaScript.  I also believe it would use less server-side CPU and less bandwidth than VNC or X forwarding: low-level access to the rendering engine’s internals makes reducing the volume of data sent both more accurate and cheaper to compute.  Nobody will do it, of course, but it makes a good thought experiment nonetheless.  A proof of concept drawing just arbitrary text (a notepad) would amuse me, but I don’t have the patience.

Baking bread is hard

March 10, 2009

I’ve been trying to bake bread recently, and having little luck. I use King Arthur Flour, but moved from Unbleached White All-Purpose and Whole Wheat to Unbleached White Bread. Bread flour mixes better, rises better, and bakes better; but I’m still screwing something up, and not quite getting the rise I need. Problems occur with both sourdough starters and artificially cultured Red Star yeast.

The key to bread lies in exactly the same place as beer: patience. That and a measure of warmth. I was never giving the dough time to rise, and never in a warm enough place. A cold counter with an hour of time doesn’t double the bulk at all, and several hours only finds me with inactive yeast. Starting with cold water doesn’t help either; a warm oven doesn’t help much in that case.

Dough in a bread machine

I started using a bread machine to make the dough, but not to cook it; while the dough rises, the machine warms the bucket, facilitating the rise even better than a warm oven. Leaving it too long (e.g. all day) makes it melt into useless, alcoholic mush though! So I have to actually be there, and be patient, while being lazy; one day I must learn to do this by hand, a task I outright fail at whenever I try.

Once the dough has risen in the bread machine, I put it back on the dough cycle for a few seconds and sprinkle in a little more flour. This thickens the dough and makes it easier to handle. It also punches the dough down and kneads it into a ball, which I can throw into a loaf pan. Doing this loses me a little dough, but gives me warm dough that will rise in my oven; while the oven’s off, the pilot light keeps it warm, though not warm enough to heat cold dough from scratch.

Allowed to rise to 1/2 inch below the pan's edge before baking

Actually letting the dough rise above the edge of the pan produces excellent results. The loaf to the right was not allowed to rise flush; instead, it only reached to about half an inch below the edge. For the first few minutes of baking, the heat accelerated fermentation and expanded the trapped gas, raising the dough to the edge of the pan. The result? A relatively flat, dense loaf. I use it for sandwiches, but it does feel a little rough and heavy going down.

Loaf allowed to rise half an inch above the pan

I allowed the loaf to the left to rise just a little bit above the edge of the pan, by maybe half an inch. This achieved splendid results; the bread crowned as it began to bake, and came out lighter and fluffier. Next time I may allow slightly more rise; at some point, though, the dough becomes soft and weak and falls apart. Good bread flour helps with this, along with the extra flour added after the first rise and the additional kneading, which form strong gluten chains.

Loaf back in the oven

For a special touch, I placed the loaf back in the oven for a few minutes to finish. When I removed it from the pan it still had a little to go, with a soft, moist bottom indicating a doughy interior. I placed the loaf back into the oven to bake a little longer. I also splashed the oven with water every few minutes during baking, more frequently after the oven preheated; a pan of water sat at the base the whole time as well. I also preheated the oven with the loaf in it, to take advantage of the extra rising period.

All in all, baking bread takes patience, time, and effort. I use a sourdough starter I made at home, along with cultured yeast like SAF-Instant to give extra rise. Eventually my sourdough starter will take off on its own; I may simply need more patience for this. The bread machine helps, both with warmth and with mixing the ingredients better; I don’t strictly need it, and hate the way stuff comes out when baked in it.

Keeping the Playstation 2 Alive

August 7, 2008

Have you ever thought about the forward push for a new marketing strategy?  The big jump in a product to something that’s the same, but totally different?  It doesn’t take the same input, be it fuel or programs or ink cartridges; but it does the same thing as its predecessor, maybe a little shinier.  It’s the strategy a business uses to stay relevant: push new product.

That strategy fails once in a while.  Today’s buzz pointed to Windows XP still outselling the much-hated Vista, but I seem to recall the Playstation 2 still outselling the much-taunted Playstation 3.  I think the Sony product makes a far more interesting focal point for the Push New Product Fallacy: the idea that a company must, every so often, ship a totally new product line that obsoletes the old product in order to survive.

First, let me explain what I see in Playstation 3.  I see the very valid and strategic need to compete with both the Shiny Upgraded Graphics factor of the Xbox 360, and the peer pressure from the Wii.  Old products don’t sell mostly because the new product garners more attention; Sony needed a new buzz, and thus came the birth of the Playstation 3.  Besides, at the time they wanted Blu-Ray to trample HD-DVD and needed an in for that market.

I don’t, however, see the need to kill the Playstation 2 as a revenue stream.  In fact, I would call Playstation 2 the biggest opportunity Sony has right now for recovering their poorly performing Playstation 3 revenue stream.  With a few upgrades, tweaks, and marketing maneuvers, the Playstation 2 could become a side source of revenue based on a large value add handled mainly in software.

First let’s discuss some very basic hardware.  I can get a 1GB Kingston MicroSD for $3.30, and 1GB USB flash drives go for as low as $6 including the USB storage chip and a NAND controller; all in all, I’d say the bulk cost for the controller and the NAND would have to fall under $3 in total, maybe less.  With DVD drives, a straight DVD reader costs $15 while a DVD±RW drive with DVD+R DL and DVD-RAM costs $19, so maybe a $4 mark-up to upgrade that.  A DVD encoder chip would cost around $10, maybe.
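
Summing my own rough estimates (every figure above is a guess, so treat this as back-of-the-envelope):

    $3 (NAND + controller) + $4 (burner mark-up) + $10 (encoder) = $17 < $20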

So, with a flash upgrade and a new DVD burner, the Playstation 2 could come in at under $20 of increased manufacturing cost.  Package it with a $10 remote and you have a dedicated DVD player and recorder.  The device of course needs a firmware rewrite to supply timer functions, DVD recording, and a better user interface for a DVD player; but it serves its purpose.  The system can even keep the rest of its hardware.

Given this strategy, Sony now has a basic $150 DVD player it can market: plain DVD players go for $50, but DVD recorders go for $100-$150.  It plays Playstation 1 and Playstation 2 games as well, which gives it a strong value add and a nice, long life cycle.  Sony could keep a good stream of third-party developer revenue because the system becomes a standard household appliance, packaged at the standard price, competing for that market while satisfying an extra function; the Playstation 2 would find its way into houses not even looking for a game system, and thus becomes an interesting platform for developers.

With just software, you could add a DVR function that dumps the encoded movie to a hard drive based library, provided the user brings a USB hard drive speaking the standard USB Mass Storage protocol (which is to say, any of them).  That puts it into the range of a $300-$2000 device, where the price mostly comes from the size of the included hard drive; here the user can bring any cheap hard drive.  Congratulations, you now compete with TiVo at a one-time production cost for software.

Software again, along with a radio chip and a small ARM processor (a $1 part with 2MB of embedded RAM), gives you 802.11n and whatever else by firmware update, plus the ability to use SMB, sshfs, and DAAP for a network media center.  This competes with Xbox Media Center at, again, a one-time production cost for writing software.

So now we have $20 of hardware upgrades that raise the per-unit production cost, and some initial research and development costs that eventually fade into insignificance, all buying a massive value add for the Playstation 2 as a household media appliance.  This means Sony can market the Playstation 2 to a new audience, and use the segment of that audience with young children to get its games in the door.  Your kid wants a game system, so you buy the latest buzzword (Xbox 360)… or wait, the DVD player already in the living room plays Playstation 2 games; you have a game system.

The Web Browser as a Milli-Application

July 6, 2008

Several times I have considered designing applications the way you design a microkernel: portions of the application would run in separate processes, communicating over IPC, such that if one process dies it doesn’t directly affect the others.  Graceful fault handling becomes much simpler, and buggy chunks of the browser don’t become universal security hazards if their privileges are segmented.

I found that the Epiphany browser uses much less memory than Firefox 3, which helps when you get stuck with a laptop with only 512MB of RAM while today’s programs think they have a right to more than the 32MB Windows 98 ran on fine.  I really, really miss the Awesome Bar though, and figured someone should add it to Epiphany.  And why not as an extension, even?

Zing!  Why not make everything an extension?!  History, an extension.  Bookmarks, an extension.  Tab browsing, an extension.  Tab browsing integration with History and Bookmarks, an extension depending on the previous three such that it won’t install if you don’t have them.  Password manager, an extension.  Encrypted password, an extension depending on the password manager.  Smaller/Larger buttons, an extension that later could get replaced with full page zoom.

In theory, each extension could use an individual process, communicating over IPC, functioning effectively as a micro-application.  This really helps immensely once we get into Flash and Java (a plug-in extension to run plug-ins?); but when we just want to add tabs and a history feature, we can load all that stuff as normal.  Rather than a micro-application, this would be a milli-application, or more basically, a modular application with a lot of modules.
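
A minimal sketch of the idea, with every name my own invention: each extension runs in its own OS process and talks to the core over a pipe, so a crashing extension takes down itself, not the browser.

    import json
    import subprocess

    class Extension:
        def __init__(self, command):
            # One OS process per extension; stdin/stdout carry the IPC.
            self.proc = subprocess.Popen(
                command, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                text=True)

        def call(self, method, **params):
            try:
                self.proc.stdin.write(
                    json.dumps({"method": method, "params": params}) + "\n")
                self.proc.stdin.flush()
                return json.loads(self.proc.stdout.readline())
            except (OSError, ValueError):
                # The extension died mid-call; degrade gracefully
                # instead of crashing the whole browser with it.
                return {"error": "extension unavailable"}

    # e.g. a history milli-app the core queries for completions:
    #   history = Extension(["./history-extension"])
    #   matches = history.call("complete", text="wiki")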

Modular applications tend to encompass all desired functionality in the main application, and only give extensions or plug-ins for not typically anticipated enhancements, or for anticipated enhancements that follow a diverse and unpredictable level of extension (for example, sound codecs).  Most applications don’t separate out core functionality into smaller, independent code bodies that extend a simple core.  I believe this design, however, would clearly separate out important subsystems and allow for easy replacement with no surprises.  Or maybe I just like small bits.

Anyway, back to my original point:  someone needs to port the Awesome Bar to the Epiphany browser.

Financial planning

June 30, 2008

I feel the need to underscore the importance of good financial planning not really for anyone’s benefit but, frankly, because some people have routinely pissed me off criticizing my recent corrections to my own budgeting problems. I’m getting tired of it; it’s not enough to have nothing to prove, not when “I bought something” results in “OH MY GOD YOU’RE POOR BECAUSE YOU SPEND ALL YOUR MONEY LOLZ IDIOT” all the fucking time. So here’s a peek into my money life.

First off let’s talk about bad financial management. In the beginning of this year, I spent a good $4000 between January and late February. I decided to learn to play guitar, and basically trashed my cash buffers on a guitar, an amp, pickups, learning materials (books), and the like. That plus spending at MAGFest took me down a few notches nice and hard. At the time I had no job and still squeezed my funds down to nearly nothing!

So I started saving money. At first I tried a $400 budget for personal spending, which I routinely broke ($800, $900) through improper spending and one planned large expense (a guitar). By May I had $2000 saved away, which went to a $1400 state tax back payment. I had $500 on hand, my truck broke, and I had to get my parents to cosign a car loan… and let them talk me into a new car instead of a good used vehicle. I still wasn’t getting it, but I was doing a little better.

It was at this point I began seriously budgeting my money, looking at the charts and graphs my accounting program generates and examining the income reports for a more fine-grained view (I didn’t study accounting as an elective for nothing). I started at May, faltered a little, cleaned things up and continued into June. I set my new personal spending maximum to $250, but didn’t stop there; I added two more dynamics to my budgeting: net income planning and time planning.

“Net income planning” is just a fancy way of saying I need to save some money every month. In my case, I want $1000 a month of net income, and preferably $1000 of liquid assets (some of my money goes directly to investment funds, and I count that as if it’s already invested as soon as I have it). If I miss either of these, I examine the month to determine why; likewise, if my income or expenses change for a specific reason I adjust this goal to remain both realistic and prudent. When I get a raise, for example, I should increase my target saving.

“Time Planning” directly addresses mid- and long-term considerations. I want my car paid off at the end of the year, so I should have $10,000-$13,000 on hand by then to do so. I want to be able to plan my moving out starting January, with the realistic ability to do so at any time; my net income should allow for the cost of rent and utilities by January, and I should have a realistic amount of money for a buffer by February. I want to get a different car; more on this later, it’s complex.

So this all sounds well and good, but what about actual performance? I stayed under my $250 personal spending cap in May by $20, but pushed to $322 in June. I opened July with $100 on Guitar Hero and figured I’d stay under $150; then my computer broke, and there’s no budget line for that, so to satisfy my net income and time planning constraints I’m making a mental note to recover that from my personal spending over time. It seems I’ll just have to hold back on those new guitar pickups until after August to make sure it’s safe.

Okay, so I’m wobbling over the line through my own fault, and by sheer chance I’ve been thrust past my own restrictions and have to do a little work to get back on track. These things happen. I actually have the money to handle it; moreover, I can negate the effects entirely. As for the overspending… I’m about $100 over, and siphoning off $160 to correct a small bump, so I’ll likely buy a $50 game or a couple audio CDs in the next 2 months and leave it at that.

I missed my net income goal, totaling about $750 instead of $1000. Blame a dentist visit and the year’s subscription to Amazon Prime. I’ve incorporated some ramen and some less expensive drinks into my food spending, which could cut my food costs by 60% at the extreme; realistically I’ll keep buying other food and land around a 40% cut. It seems some one-time expenses, along with the overspending, pulled this down; it’s still a marked improvement, and I’ll do better this month.

My time planning is on shaky ground. I’m looking forward to the end of August; I’m being sold off by a contractor, to another contractor. Right now my current employer takes a large chunk of my would-be pay as overhead; I should get a good raise when I become an employee of the second-level contractor, plus they have actually good medical benefits, so I can save another $282/month. The numbers I’m estimating here are predictions and could fall higher or lower: a $10/hr overhead cut, over 26 weeks (6 months) at 40 hours a week, means my employer makes $10 × 40 × 26 = $10,400 off me. We shall see.

On this prediction, I should have my car fully paid off at the end of the year; really, I’ll have a large chunk of money I can just kill the remainder off with. Also, at that level of income, I’m fully stable enough to move out; I will start a real apartment search in January, and save money for a good cash buffer. This leaves me one more goal to throw in: a new car.

I don’t want my car; I want a manual. In fact, I want a $30,000 Mustang with a lot of perks. I want it at $300/month, which means I need a trade-in and down payment combined somewhere around $15,000 at 6% interest, or maybe $18,000 at 10%. This creates a complex decision: if I get the car first, it will set back moving out by about 2 months at a minimum, or more if I hold out for a better down payment (and a better financial position, because of smaller payments). If I move out first, getting the car will be set back by exactly 2 months.
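
Sanity-checking that with the standard loan payment formula (the 60-month term is my assumption; principal = $30,000 minus $15,000 down, monthly rate = 6%/12):

    M = P * r / (1 - (1 + r)^(-n))
      = 15000 * 0.005 / (1 - 1.005^(-60))
      ≈ $290/month

So the $15,000-down, $300/month figure holds together at 6% over five years.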

This decision is interesting because if I move out first, I’m set back 2 months getting the car; if I get the car first, I’ll also regenerate a cash buffer for moving out faster. In either case, a bigger down payment helps me by getting me smaller monthly payments; this sweetens both options. Waiting to move out means I save more money faster, and can quickly improve my long term financial situation; but this option inherently encourages me to get the car first, since at that point there’s a big enough buffer for me to seek an apartment immediately. Moving out first means I can wait longer for the car and not have to stay at home, which takes some of the pressure off to jump the gun on the car.

Now for the interesting part. Current projections indicate that my entire year’s personal spending, if kept to budget, would set back either of these plans by about 2 weeks. Viewed as forgone long-term investment, the opportunity cost of the car over 10 years is about $60,000; if we deduct the trade-in (assume $10,000, though I’ve considered values down to $7,000), it drops to only about $40,000. Based on my general investment plans (when I have excess saved-up cash, invest it), I’ll have between $600,000 and $700,000 by the time I’m 30 (with luck and good investing I can possibly break a million, but it’s hard).
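
One reading that makes those numbers line up (my assumption; the rate isn’t spelled out above): money doubling over the decade, i.e. roughly a 7.2% annual return by the rule of 72.

    FV = PV * (1 + r)^n
    $30,000 * 1.072^10 ≈ $60,000   (the car's price, doubled)
    $10,000 * 1.072^10 ≈ $20,000   (the trade-in, doubled: $60k - $20k ≈ $40k)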

These projections don’t consider buying a house, and that’s expensive. For that I’m thinking I need to save up for 2-4 years, in an apartment, to make a 20% down payment. This gets me out of mortgage insurance and also reduces my interest and monthly payments. This in itself has opportunity costs, as I’d like to keep much of that liquid in my savings account rather than in strategic investments or a mutual fund; then again, I need to learn to strategically sell to extract the money when needed anyway (instead of just to get the money into a better investment). I’ve got time to think about it.

If you’ve actually read this far, you might have noticed I’ve got some shaky predictions for where my money’s going to come from. You should also note that all my big-spending plans fall somewhere after those variables become known, instead of just launching on the (possibly bad) assumption that I’ll have more money soon anyway. If my income predictions totally fall through, I’ll be set back a lot; my personal spending will set me back maybe 2 months per year in and of itself. On strict numbers that means I should reduce it to $75/mo; but it sits at a comfort level for me, and in the long term it’s dismally insignificant.

Now people, stop fucking bothering me about my finances every time I say I bought something. I know perfectly well where it’s at and what I’m doing; I’ve got everything under weekly examination, and every month I make predictions, projections, and analyses of how effectively I’m managing my budget. I’ll definitely take notice if I’m spending too much, and I’ll figure out why and fix it; and it’s damn immaterial if I spend $50 out of my controlled personal spending on something I could get for $30 next year.

Automatic transmissions suck

May 28, 2008

I’ve had a 1991 Nissan pickup truck with 230,000 miles on it, abused to hell, since I started driving.  Nice truck, but for the past couple years I’ve spent a lot of time under the hood trying to make it work right.

The truck worked, but irritated me with some sort of engine issue.  When accelerating hard, it made a lot of noise and slowly gained speed; you’d think that if you hit the gas and the engine makes a lot of noise, it’d go somewhere, right?  I fixed a hole in the exhaust, dumped all kinds of crud in the engine, replaced the fuel pressure regulator and fuel injectors, and even added a high-flow air filter, but to no avail.  Eventually the damn thing leaked a lot of oil and bent a connecting rod, so that’s that.

New car!  A 2008 Chevrolet Cobalt LS with an automatic transmission.  I wanted a manual so I could shift the gears myself, but my parents (the cosigners) coerced me, and a salesman (we’ll call him Dan) at Koons Chevrolet White Marsh said that the engine would peak out in 5th gear at highway speeds and burn a ton of gas, getting horrible mileage (Dan also said it had a 2.4L engine and anti-lock brakes, but we’ll ignore those two misrepresentations for now; I don’t like ABS anyway).  So now I have a car with an automatic transmission.

The car does the same damn thing.  It’s just as broken as my old truck and it’s still irritating the fuck out of me.  I went online and asked some car technicians, and the question came up:  is this an automatic or a manual transmission?  To be blunt, the verdict I got was that automatic transmissions are shit and lose a lot of power in the torque converter, especially when accelerating.  It’s not broken, it just drives like shit by design. And by the way, Chevrolet says the manual gets better mileage on the highway.

A torque converter uses a ball of goop to transfer engine power.  Think of it like cake batter: when you turn the mixer (engine) on high, the bowl (wheels) starts spinning slowly even though the mixer gets to full speed almost instantly; after several seconds the bowl gains a good bit of speed and settles at a fast rotation rate, but the batter’s still getting beaten.  The mixer can keep spinning if you stop the bowl, too, just like how you can hit the brakes and stop yet the engine stays at around 800RPM.

In a manual transmission this doesn’t happen; instead, you’re actually driving the car.  You hit the gas, the throttle opens, and the engine goes faster.  If the engine spins faster, the transmission necessarily must spin faster or gears snap, and the wheels must spin faster or the axle snaps.  Of course, it also stands to reason that if you brake hard, the engine must stop and thus stalls out; brakes and clutch at the same time, please.  You also actually have to shift gears with a manual.  On the plus side, you can use the clutch for stunts or, even better, for control in snow, disengaging the engine from the transmission to remove the engine braking effect; and of course you get better acceleration and fuel economy overall.

800 miles in and I want to sell it.  I don’t want it; it’s based on really bad design principles and feels like shit to drive (just like every other automatic transmission).  I really don’t want to take the financial hit, but I’m getting rid of this car as soon as I can.  I’m not happy about the salesman lying to me either; the dealer sent me a customer satisfaction survey, in which my extra comments included a recommendation to “send him to Hell.”  Enough of this shit.