All posts by Jay Krivanek

I have been involved in radio in some form since 1993. I earned degrees in broadcast media and information systems. I have also worked in technology that helps empower radio enthusiasts in the industry I love. I come from more of the hobby spirit of radio rather than the commercial one. Despite my love for media, I am an advocate of a much more diversified media industry than the one we have today. I tend to champion campaigns for local diversity as opposed to homogenized markets. I like long walks on the beach and dogs too.

Concepts not Syntax

A big part of the world of programming is understanding concepts. Learning how to learn is something every programmer has done. How do you learn new concepts? What are your methods for improving efficiency and elevating your craft? You would think these would be the biggest questions in the hiring process. Sadly, however, they aren't.

You can see this in how a lot of programming interviews go: asking a programmer to work in the most alien way possible. No IntelliSense, no Google-fu, no man pages. The worst interview tech question I ever got was, "How would you learn to use a tool without the man page or the Internet?" I don't know; how would I compile a program without a compiler? We can all do great things with constraints, but by taking away the tools that make our work efficient, what are we really learning about a candidate?

At my last gig I was running a small team of Python programmers. I was in charge of interviewing and acquiring new talent for the tech side of our business. What did I interview for? I mostly looked for how the candidate worked and conceptually understood things. Problems were presented to them as an expected input, an expected output, and a broken algorithm that needed to be repaired. Most of the company's time was spent on problems like these. The algorithm had no unit tests. Did the candidate ask about unit tests? Did the candidate poke at the code to brute-force their way to a solution? A surprisingly high number did. The process was unconstrained; they could search the internet all they wanted. Why? Because they wouldn't find some quick Fizz Buzz solution by doing so.

The broken algorithm I wrote for the interview was a small one: all it did was look up a phone number by name, and the data was just stored in a dictionary. The code was a little 6-line JavaScript file. No frameworks were utilized. If the candidate knew how to conceptualize what the code was trying to do, they should have had no issue debugging and fixing it. In the time the candidate spent debugging that code, I learned everything I needed to know about them. Even if they had never written a line of JavaScript, did the candidate look up JS and learn what they needed? Frankly, JS syntax was the least important aspect of the interview to me. Runtime errors can help you resolve many of those issues. The time spent working on the code was also not that important. However, if they spent 15 minutes without really talking it out, searching for anything, or explaining where they felt the problem was, then they were probably not that great of a candidate. While you might know answers to interview questions by heart, if you can't produce that skill in the real world, you aren't likely to be successful.
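To make the exercise concrete, here is a sketch of the kind of bug involved. The original was a 6-line JavaScript file I'm not reproducing here; this Python version and its names are my own illustration.

# Hypothetical reconstruction of the exercise: look up a phone number by name.
phone_book = {"alice": "555-0100", "bob": "555-0199"}

def lookup(name):
    # Planted bug: this iterates over the numbers, not the names,
    # so a name never matches and every lookup "fails".
    for number in phone_book.values():
        if number == name:
            return number
    return None

# The fix a candidate should talk their way toward: return phone_book.get(name)

The point of a toy like this is that the candidate's reasoning out loud matters more than the one-line fix.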

Sadly, the interview world for tech isn't getting any of this. Qualifications are based on knowing the specific syntax or languages that are the company's current flavor of the month. What happens when that language is out of vogue, or a new concept comes around that is better than what they currently use? Will they keep requiring their tech talent to hold this outdated syntactical knowledge, or will they hope that their talent can re-conceptualize the problem in the new language or toolset?

Hire for concepts not for syntax.

E-Mail is and Always has Been Broken

I’ve been generating email on my own services for quite some time. 20 years ago sending email was easy. I could fire up sendmail and do something like:

sendmail -f steve@example.com friend@example.com <<EOF
Subject: Hello my friend
From: Steve Jobs <steve@example.com>

I love you
EOF

Easy as pie. The problem with this ease is that I could set that -f argument to any email address I like. On the other end the email would look legitimate, so long as I didn't abuse the receiving email provider with my IP. We know where this story goes: a spammer pretends to be the CEO, asks finance to write a check to ABC company, and finance makes the blunder of actually trusting what they saw in an incoming email.

The 2000s are filled with poorly-thought-out solutions to these kinds of problems. Most of them revolved around scanning messages for certain characteristics, phrases, or expressions, and then there were the IP reputation services. PGP was an attempt to secure message authorship, but it required active participation from all receivers. While these ideas met with limited success, there were many casualties along the way. There are also the "double knock" type email systems, where your intended party's service sends an automated message back to you and asks you to verify you are human before your message is released from a queue. Solutions like these cause all sorts of unintended side effects, especially if the email can never be seen by the intended party, even by perusing their spam folder.

Enter Sender Policy Framework, or SPF: a way to use something authoritative, like DNS, to tell mail providers where they can expect mail for a domain to come from. An email from a domain without SPF available might seem sketchy to some email providers. Add an SPF policy and you get a gold star from the likes of Google, Outlook and other free e-mail providers.

v=spf1 ip4:127.0.0.1 ip4:127.0.0.2 include:mx.example.com include:mx2.example.com -all

What does this mean? v=spf1 is the version of SPF we are utilizing. As of 2021 this will always be v=spf1. Next in importance is the end of this line. -all tells other providers to FAIL any email that doesn't conform to this policy. If your domain does not serve email, at bare minimum you should set this record to:

v=spf1 -all

The above basically says: reject all email claiming to be from my domain. The optional mechanisms of the SPF record are "a", "ip4", "ip6", "mx", "ptr", "exists" and "include".

  • A – Tells providers that mail coming from an IP in the domain's A record should be accepted.
  • IP4 – (followed by a dotted IPv4 address or network) tells providers which IPs or networks can send email under our domain.
  • IP6 – is the IPv6 equivalent of the above.
  • MX – tells providers that mail coming from the servers in our MX records is safe.
  • PTR – tells the provider that matching rDNS for a client is enough to allow email (this should be avoided).
  • EXISTS – uses SPF macros to match addresses.
  • INCLUDE – includes the referenced domain's SPF policy.
  • ALL – matches everything, which is why ALL is defined at the end of a record.

Next are the qualifiers:

  • + : Tells other providers that the rule is passing; lack of a qualifier defaults to this.
  • - : Tells other providers to fail email matching this rule.
  • ~ : Soft fail; the message is typically quarantined.
  • ? : Neutral; typically interpreted as if no policy exists.
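
Putting a mechanism and a qualifier together: a small domain that only sends mail from its own MX hosts might publish a record like this (hypothetical):

v=spf1 mx ~all

That is: pass mail from the servers in our MX records, and soft-fail everything else.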

Now that we know all of this we can interpret this line:

v=spf1 ip4:127.0.0.1 ip4:127.0.0.2 include:mx.example.com include:mx2.example.com -all

Version 1: pass IPs 127.0.0.1 and 127.0.0.2, include the SPF records from the domains mx.example.com and mx2.example.com, and reject everything else.

Now that we know SPF, problem solved, right? We can all implement SPF and never look at this problem again. Well, no. We also need to apply a cryptographic signature to emails, because how do we know that the email we sent hasn't been altered to look legitimate? Once it leaves my outbox, I trust many parties with my email, and any one of them could make an alteration to it. I am sure you are familiar with the concept of a Man in the Middle attack. This is what DomainKeys was meant to solve.

Back in the mid-aughts, Yahoo was in the constantly frustrating position of being a spam target, and as such they saw a growing problem of manipulated emails. They brought out their own standard, DomainKeys, to resolve these issues. The problem was that not many people were aware of DomainKeys, and other providers like Cisco were also trying to solve the issue of manipulated emails; Cisco called its system Identified Internet Mail. Instead of competing to solve this problem, they opted to join forces and create *drum roll* DomainKeys Identified Mail. We all know this today as DKIM, and it would address the issue of mail authorship identity.

DKIM solves this by publishing a public key in, you guessed it, the authoritative DNS of our domain. Are you detecting a theme here? We also need to cryptographically sign our email at the source. Many use a tool called OpenDKIM, a filter placed in a mail server like Postfix. A private key is used to generate a signature that is added to the headers of our message, labeled DKIM-Signature. Inside this signature we define which header fields we want signed (the "From" field is the only required one), along with the cryptographic algorithm in use. From Wikipedia:

DKIM-Signature: v=1; a=rsa-sha256; d=example.net; s=brisbane;
     c=relaxed/simple; q=dns/txt; i=foo@eng.example.net;
     t=1117574938; x=1118006938; l=200;
     h=from:to:subject:date:keywords:keywords;
     z=From:foo@eng.example.net|To:joe@example.com|
             Subject:demo=20run|Date:July=205,=202005=203:44:08=20PM=20-0700;
     bh=MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=;
     b=dzdVyOfAKCdLXdJOc9G2q8LoXSlEniSbav+yuU4zGeeruD00lszZVoG4ZHRNiYzR

The important elements of the above are the version (in this case 1), the algorithm (in this case rsa-sha256), the domain (example.net), and the selector (brisbane, the name of the DNS record holding the public key). bh is the hash of the message body, and b is the signature itself. The mail provider will then use the public key, along with elements of the message, to recompute the hashes and compare the result. This TXT DNS entry will look something like this:

Name: brisbane._domainkey.example.net.
Value: v=DKIM1; k=rsa; t=s; p=<public key>

If the hashes match, we have a winner and the message passes DKIM; if they are mismatched, it fails. The absence of DKIM is considered, as of 2021, neutral, because not everyone has caught on to the importance of DKIM.
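If you use OpenDKIM, generating the key pair and the matching DNS record is a one-liner. A minimal sketch, assuming the opendkim-tools package and the selector and domain from the example above:

opendkim-genkey -s brisbane -d example.net

This writes brisbane.private (the private key your signer uses) and brisbane.txt (the TXT record to publish at brisbane._domainkey.example.net).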

However, a new problem arises. We have these great tools to help fight fraudulent email, but setting all of this up is confusing, and every mail provider may weight certain aspects of these tools more than others. On top of this, once our email messages are delivered to another mail provider, we really have no idea what happened to them. Was a message delivered? Did our broken DKIM implementation block it? What if Kevin in support decided to set up a newsletter service for his customers and neglected to tell IT that they need to add a new provider to the SPF record? He is happily toiling away building newsletters that are never seen, because they all go straight to spam or are outright blocked, and IT is unaware because who would ever report this issue?

Enter Domain-based Message Authentication, Reporting and Conformance. Boy, that was a mouthful. DMARC was created to solve this issue. We set up a new, oh, you guessed it, DNS record to tell email providers what to do with our broken email systems. On top of this we also define a reporting address to receive helpful reports about the activity of email carrying our domain. This is an important point, because any number of email systems could be claiming to be an SMTP provider of our domain. Some of these systems are legitimate; some are not. Using a DMARC policy we can tell other email providers what to do with email that doesn't pass DKIM or SPF, and we can get daily reports on what is happening with emails claiming to be from our domain.

v=DMARC1;p=quarantine;sp=reject;pct=100;rua=mailto:[email protected];

v defines the version of DMARC we are using, p is the policy, sp is the subdomain policy, pct is the percentage of emails to apply the policy to, and finally rua is the address to send aggregate reports to.

Generally you want to be fairly permissive with DMARC while you figure things out. I recommend utilizing the quarantine policy until you have things working as they should. Examine your DMARC reports along the way, and once you are certain that the systems that should work do, move to a reject policy.
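A rollout might step through records like these (the reporting address is a placeholder):

v=DMARC1;p=none;rua=mailto:dmarc@example.com;
v=DMARC1;p=quarantine;pct=100;rua=mailto:dmarc@example.com;
v=DMARC1;p=reject;sp=reject;pct=100;rua=mailto:dmarc@example.com;

p=none is the most permissive option: providers report on failures but take no action, which is useful while you collect your first reports.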

What about these DMARC reports, what do they say exactly? Well let’s look…

<policy_published>
  <domain>example.com</domain>
  <adkim>r</adkim>
  <aspf>r</aspf>
  <p>quarantine</p>
  <sp>quarantine</sp>
  <pct>100</pct>
</policy_published>

This block tells us what the mail provider detected as our current policy. This policy defines a relaxed posture toward DKIM and SPF alignment. Additionally, when a problem is detected, we are asking them to only quarantine messages, which means the messages typically go to the spam folder.

adkim and aspf both default to a relaxed posture “r”. You can define a strict one by setting your DMARC option adkim=s;

Next you will find a record…

<record>
  <row>
    <source_ip>127.0.0.1</source_ip>
    <count>36</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>pass</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>
  <auth_results>
    <dkim>
      <domain>example.com</domain>
      <result>pass</result>
      <selector>mail</selector>
    </dkim>
    <spf>
      <domain>example.com</domain>
      <result>pass</result>
    </spf>
  </auth_results>
</record>

Seeing this record tells us that 127.0.0.1 is a good source for our domain and has successfully sent 36 emails to this specific provider. DKIM and SPF pass, and all mail has made it to the intended parties. Note that emails could still be filed as spam for not meeting other heuristics of the provider, or by being marked as spam by individuals. So what happens with problem sources?

<record>
  <row>
    <source_ip>192.168.1.1</source_ip>
    <count>5</count>
    <policy_evaluated>
      <disposition>quarantine</disposition>
      <dkim>fail</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>
  <auth_results>
    <spf>
      <domain>example.com</domain>
      <result>softfail</result>
    </spf>
  </auth_results>
</record>

This tells us that a server located at 192.168.1.1 tried to send 5 messages as us to this provider. Since it was not in our SPF and likely could not have produced a valid DKIM signature, the messages failed and, per our DMARC policy, were properly quarantined.

We set up all three, and now we can continue sending automated email and ride off into the sunset. Right? Well, no. Automated email can easily break DKIM. Even if you set up everything perfectly, if you do not know how to format a message to conform to SMTP limits, the message can be altered in transit, breaking DKIM signatures. See, still broken. SMTP has limits, and unfortunately a lot of tools have been placed over the top of this unruly protocol. First and foremost, you must ensure that your systems are using the correct sender. Use -f to set an appropriate envelope address that contains our real domain; don't just let the system set whatever it likes, because it will use the current user @ hostname as the author. Even if you set a "From" field in the body of your email, the "-f" flag is critically important in identifying who the real author should be. I know this sounds counter-intuitive, but "From", "-f", "Return-Path" and "Reply-To" all do different things.
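For a cron job or other automated sender, that means invoking sendmail with an explicit envelope sender as well as a From header. A sketch with placeholder addresses:

sendmail -f reports@example.com admin@example.com <<EOF
From: Nightly Reports <reports@example.com>
To: admin@example.com
Subject: Nightly report

All systems nominal.
EOF

The -f address is what SPF is checked against and where bounces return to; the From header is what the recipient sees.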

Additionally, SMTP has line length limits. It's easy for the body of a message to contain log lines or HTML content that hits these limits. The general rule I follow is no line longer than 900 characters. You can also opt to encode the message to protect it from these limits. The encodings I quickly reach for are:

Content-Transfer-Encoding: base64
# or
Content-Transfer-Encoding: quoted-printable

Wait, why does this matter? You probably thought I was talking about troublesome concepts like DKIM or DMARC. It matters because if you fail to limit line length, some SMTP server along the chain is going to force the change for you, and when that enforcement occurs it will alter the original message and break the DKIM hash. In the old days this alteration would have gone unnoticed, but with cryptographic integrity on the line, it will be noticed.

Base64 encoding is probably the easiest to understand: all we need to do is encode the material and split it.

cat file.log | base64 | fold -w 76

Now you just need to set a Content-Transfer-Encoding header in the email, along with the original Content-Type.
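If you build messages in code rather than shell, the standard library can handle the encoding and wrapping for you. A minimal sketch in Python 3's email package; the addresses and file name are placeholders:

# Build a message whose body is base64-encoded so no line
# exceeds SMTP line-length limits.
from email.message import EmailMessage
from email.policy import SMTP

msg = EmailMessage(policy=SMTP)
msg["From"] = "reports@example.com"
msg["To"] = "admin@example.com"
msg["Subject"] = "Nightly report"

with open("file.log") as f:
    body = f.read()

# cte="base64" sets Content-Transfer-Encoding: base64 and wraps
# the encoded body at 76 characters.
msg.set_content(body, cte="base64")

print(msg)  # or hand the bytes to sendmail or smtplib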

Another option is Quoted-Printable. This is typically used for HTML emails, so I won't go into much depth, but I can tell you how it works. You fold all lines to be no longer than 76 characters, and you mark the wrapped line breaks with a trailing equals sign (=). Any = characters found in the source document also need to be encoded, as =3D.

An example:

Content-Transfer-Encoding: quoted-printable

<html><head></head><body style=3D"font-family:Arial;"><table align=3D"cen=
ter" border=3D"0" cellspacing=3D"0" cellpadding=3D"0" width=3D"60%" bgcolor=
3D"#FFFFFF" style=3D"background-color:#FFFFFF;table-layout:fixed;-webkit-te=
xt-size-adjust:100%;mso-table-rspace:0pt;mso-tablelspace:0pt;-ms-text-size-=
adjust:100%;min-width:500px"><tr><td><table align=3D"center" border=3D"0" c=
ellspacing=3D"0" cellpadding=3D"0" width=3D"100%" bgcolor=3D"#EDF0F3" style
...

Basically, = allows you to encode any character, and you must use this escaping for any non-printable ASCII characters. You can read more in the wiki article linked below.
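If you are curious how the escaping behaves, Python's standard quopri module implements it. A quick illustrative check:

import quopri

html = b'<body style="font-family:Arial;">Hello</body>'
print(quopri.encodestring(html).decode())
# prints: <body style=3D"font-family:Arial;">Hello</body>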

Hopefully these hints will help someone who runs into these issues in the future. Below are some resources that will help you on your journey.

  1. https://en.wikipedia.org/wiki/Sender_Policy_Framework
  2. https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail
  3. https://en.wikipedia.org/wiki/DMARC
  4. https://en.wikipedia.org/wiki/Quoted-printable
  5. https://en.wikipedia.org/wiki/Base64

Why streaming media servers can be better.

I have been in the streaming media business for well over 12 years, and in that time there has been one major problem with adoption: it's damn difficult. Not really difficult from the perspective of a seasoned internet user, or someone who has been in the Internet world and is very familiar with the concept of media players, browser plugins and all the other bile that comes from the tech elite. However, it is difficult with respect to how media is consumed by most non-tech people.

Think about the concept of a radio. How hard is a transistor radio to use? Find a power button, rotate a dial and, presto-chango, radio. Now try to apply this same logic to most internet radio. First and foremost I have to go somewhere, say a website, where a couple of different things can happen. I may be presented with the fact that I lack a plugin called "Flash" or some other embedded plugin (anyone remember Windows Media Player?), or I may get a link to a playlist file that won't necessarily work with the applications I have.

I truly can't believe that the industry I have worked in for over 12 years still accepts this strange state of affairs. The game changer is finally right around the corner: HTML5 is going to integrate the concept of streaming into the browser. Great! The problem is that no one can agree on exactly how to do this. Sure, we have the <audio> and <video> tags now, but there is no single codec that covers every type of browser that exists. Open source players can't get on board with proprietary codecs like MP3 or aacPlus, and closed browsers can't get on board with Ogg.

The solution is to present both types and let the browser figure it out, as in the sketch below. However, legacy browsers might not even know what an <audio> tag is, harkening back to the concept many IT people dread: upgrades. So many old-world IT divisions still require IE7 or 8, which is simply god-awful.
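A minimal example of that both-types approach (the stream URLs are placeholders):

<audio controls>
  <source src="http://example.com/stream.ogg" type="audio/ogg">
  <source src="http://example.com/stream.mp3" type="audio/mpeg">
  Sorry, your browser does not support the audio element.
</audio>

The browser walks the <source> list in order and plays the first type it can decode; the text inside is the fallback for browsers that don't understand <audio> at all.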

There is promise, though. A new codec by the name of Opus was just completed last summer, and I am patiently waiting to see if it truly sees universal adoption. The only holdout I see is Apple, since, as they claim, there could be patent issues, as with a lot of Ogg's legacy codecs like Theora and Vorbis. Will Opus stand up to the test of patents? We won't know for some time, but Microsoft is on board, and if they integrate it in IE 11 then perhaps we will see a day when Apple is simply forced by market forces to accept Opus.

For now my plan is to make it easier on the broadcaster, because, to put it plainly, it shouldn't be this hard to build a streaming radio station. I can't fix every aspect of it, but I think the days of plugins and players are over, and reducing that unknown is certainly a good place to start. My project, Steamcast, will remove the need to even worry about a player by taking advantage of what is available to make the listening experience as painless as possible, while at the same time removing the need for a broadcaster to worry about which player or browser a listener might have.

What Facebook’s IPO may mean for future web investment.

Now that the hype of Facebook as a major web investment has passed and we are all carrying on with life, the tech community and Wall Street are in a standoff on the value of one of the highest-valued web properties in the history of the web. On one hand you have an innovative web property with over 900 million users. On the other hand, it's just an ad-revenue-based website that can't quite monetize the mobile market.

For full disclosure, I am not particularly a fan of Facebook or sites like it. I think too many users freely give information to these sites with the expectation that only their approved friends will see it, without thinking about the impact of Facebook's position in the relationship. Many users of Facebook agree with me and simply use it as a tool to connect to their more fanatically Facebook-oriented friends. I will try my best to be level in my judgement of this situation and then give my opinion at the end.

Facebook has caught a wave of luck and timing. First, it was able to build a once-isolated social website for university students with a clean and simple user interface that people just got. Once they dropped the university-student requirement, their existing user base did an excellent job of pulling others into the site. There are two very important things that I feel Facebook understands: the popularity of the user base is important to Facebook's success, and so is the user interface. No one really cares what goes on outside of that. As long as it is intuitive and pleasant as a web experience, people will keep coming back. These eyeballs have value for Facebook. However, I think what has plagued many marketing companies is the dollar amount of that value. After all, an ad view or click is only worth anything if a user takes action and buys a product or service.

Facebook's IPO valued it at over $100 billion. That's a lot of money for a website that just last year made only $1 billion. This has created a P/E of 99. That is astronomical. For my less financially savvy friends, that is the price-to-earnings ratio, and you typically want this value as low as possible. Google's P/E is 18 at the time of this writing. Kraft Foods, which is a very strong company, has a P/E of 19 at the time of this writing. So we can look at the P/E as a measure of how expensive a stock really is. Sure, you see the stock value at $30 and think, wow, that is affordable, but a share only reflects a certain percentage of ownership.

Since Zuckerberg is holding control of Facebook with 57% of the voting rights, that percentage is extremely small. There are somewhere around 2.14 billion shares of Facebook. That's really high. It means that one share is just 1/2.14-billionth of the company. Just for reference, there are only 326 million shares of Google, which is why that stock's face value is so high. So when you buy one share of KFT or GOOG over FB, you actually buy a bigger share of the overall pool of stock available for that company. Now that you understand the value of a share, you might be thinking that Facebook, according to its P/E, is only worth a quarter or a fifth of its IPO value. This is probably the wiser line of thinking for a long-term investor.
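To put rough numbers on that, using the figures above: $100 billion of market value divided by $1 billion of earnings gives a P/E near 100, versus 18-19 for Google or Kraft, so by earnings multiples alone the IPO price was roughly five times what those comparables would support. Likewise, one GOOG share represents about 2,140/326 ≈ 6.5 times as much of its company as one FB share does of Facebook.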

If Facebook's revenue doesn't improve enough to show value to the institutional investor, the stock price might go down as far as $7 per share before it finally plateaus. I personally don't think that will happen, given that this is a new market and people love the idea of Facebook. I would guess that the value will level off in the low $20s. A lot of people want to blame the NASDAQ for its failures the morning of the IPO, but I personally think the stock would have had the same losses regardless. The realities of the value and the IPO deal make it clear that this was an exit strategy for pre-IPO investors.

I do think that Facebook can monetize its user base better, by utilizing smarter ad targeting and capturing value from viral activity within its community. They have attempted to capture buzzwords in status updates in the past; for example, if a user is talking about Coke, hyperlinking Coke to an ad or like page for Coke. Utilizing strengths in viral marketing is largely the next step. However, after being in this business for a while, I understand the challenges. Institutional marketing tends to be slower to understand the changing landscape of internet media and how to monetize it, and getting traditional marketers to understand what they are leveraging, including the new capability of interactivity, is a challenge. The other challenge is the end user. People can get tired of ads, and those ads present a stopping block to usage. It is worse when the targeting is done incorrectly and a user gets an ad that is either irrelevant or goes against their morals or ethics and concerns them. Users are fickle, and Facebook is not hard to duplicate. We all remember MySpace, right? That loss of connection with what was important to the end user was devastating to MySpace.

Having said that, one might think that Facebook is in a strong position, but in actuality they have a lot to lose at this point. This is a concern to investors. Who wants to be Rupert Murdoch, holding a property that you paid over $500 million for only to sell it a few years later for $35 million? So Facebook has to shake off the view that it is a fad or that it can be easily run aground by a better competitor. Facebook's Facebook, if you will.

There is also the problem of monetizing mobile platforms. Facebook has largely left this untouched, and for good reason: the user experience. A clean and understandable interface is key to Facebook's growth, and mobile ads tend not to align well with clean or friendly. Many mobile ads attempt to mislead and entrap. This is the next big challenge: getting old internet marketeers to not use ads to trick users into an action. While that may dupe naive users into going to your page and even purchasing a product or service, it seriously degrades the perceived value. Walking the fine line of intelligent advertising while keeping the experience exciting for the user is one of Facebook's biggest challenges going forward.

For full disclosure, I am not a stock analyst, nor do I own stock or property in Facebook or its competitors. I have only 2 years of experience in heavy trading in the stock market, so take what I say with a grain of salt.

What’s Next?

As I have been working on building and evolving several of my side projects into something more cohesive and organized, I have started to think about what is next in online media, or even just internet technology. Since 1998 I have known the power of the internet as a tool to bring influence back to the consumer. If you think about the way traditional media has worked, you will find that it was largely the single source of information. We relied on newspapers, radio and television stations to tell us what was going on in our world, and we expected that it would be relevant.

As society has grown, something started occurring unnoticed. Since so much was happening everywhere, it became difficult for traditional media outlets to digest it all and then put it out in a meaningful way. Organizations got larger to handle all of the information, and through media ownership reforms we had consolidation in the environment. This new environment has created a difficult situation. Suddenly what used to be news is no longer that relevant. It's not about a traffic accident down the road from your house. It is now more about how the president handles peace talks, or how a government halfway around the world is doing business in your own local government.

This has left a void in communities within large markets. I am sure peace talks are important to me, but are they as important as the fact that 5 of my neighbors down the road were burglarized? How about the fact that a local shop owner is having a fire sale, and it just so happens to be a store you visit all the time? This information never makes it to you, because its relevance by world standards is low. Traditionally, media had to dedicate an employee to write up the story. An editor or producer had to determine its relevance. Then time was spent distributing this information. At the end of the day, it was left up to the consumer of the program or newspaper to filter out what mattered to them most.

The power of the Internet and computing allows us to let sophisticated algorithms do that work for us, based on our preferences. If you have ever talked with anyone in advertising, they will tell you the most important thing in their business is understanding behavior. This has been true of media for decades and true of business for centuries. So what is happening now is a transition to allowing the internet to track and determine preference based on behavior. Your cell phone is the first step to allowing this. In a few years' time, I imagine the next big thing will be a basic tracking setup on your phone. Some might argue that this already occurs, but I imagine this tracking will be hidden under the guise of something cool or trendy for young people.

Companies are already hard at work trying to feel out how invasive they can be. There are several apps through which the user can intentionally allow location-based information to be sent. Some are disguised as games, and others are marketed as beneficial because they allow your friends to know when you are nearby or where you are located. This is why privacy is becoming more and more of a concern, but that is another article.

The next evolution won't require as much investment in time, as I see it; in fact, it has already begun. Web 3.0, as some have called it. You might find some overly technical explanation for it on the web, but basically it will be about the silent web presence: how the web is going to be a part of the physical world and how it will interact with you in that world. 10 years ago it was talking refrigerators; however, I think that is a little overboard. I imagine a world that reacts to you, knows you, and mirrors you. A dating experience involving going to a local hangout and having a device tell you that a potential mate is there as well. A social experience where behaviors are shaped by the way your preferences cross with other individuals'.

Your smart phone won’t only know that you need something to do tonight, but it will know where you should go to have the highest potential for meeting new people because, based on past behaviors, it calculates that you might need new friends.  It might remind you to lock your door because not only did you forget, but there were 5 burglaries down the street from you.

These ideas are both scary and compelling.  However, it has already happened with intangible properties like music.  Could you see yourself going back to the old model of actually having to go to a store to not only find a good artist to listen to, but to also purchase your music?  For most, I think not.

Return from respite

So yes, textclad went away for a while. I'm not certain that it is back for good, but it has a new home. I have effectively set up a cloud solution, and finally, after almost 2 years, I am at a point where it really makes sense to self-colo; I'm 75% of the way there. I am also now responsible for the direction of a new company, so we shall see how that all turns out. My first goal was to slash costs, and I think I have done that. I am also steering things toward a more global market.

The original focus of the company was on game tools. However, that is a tough market, and many of the tools fall into open source. So direction and winds are driving me to look for new markets to penetrate.

The Self Colo Project

Using a pretty fast fiber internet connection has its benefits, and I am investing in some of that. Some of my projects will be getting some much-needed overhauls as I bring much of the backend in house. I have amassed the capability to store up to 5TB of data in a closet, utilizing a RAID array and SAN system, as well as a barebones system with really impressive hardware, all for the benefit of building virtual machines, shutting down my reliance on some external systems, and providing some room for growth. Wish me well. 😉

Mac OS X pokes fun at old Windows “Blue Screen of Death”.

I found it interesting today that while I was setting up some shares on my MacBook Pro, I noticed an interesting icon for my Windows machine in the Mac OS: a monitor displaying the old Windows "Blue Screen of Death". Apple seems to have a grim view of MS, even though Windows hasn't used the type of BSOD screen shown in the icon since Windows ME. Makes me wonder if Apple shares or other Apple devices like the iPod will, or should, have the sad-looking Mac man as their icon on your Windows desktop.

The Mouse just got fatter.

The news is out: Disney is going to acquire more of our childhood with the purchase of Marvel, in a deal estimated at $4 billion. Shareholders of Marvel are happy with that price, but fans will likely not be happy with the price they may have to pay. A lot of superheroes are in trouble now as they get a new boss. There are already concerns that Disney could choose to water down the brand and remove the "edginess" associated with our favorite Marvel characters.

The reverse piggy bank

You know the image. If you were a kid in the last century, it is very likely that you were introduced to saving by the piggy bank. The concept is that if you take loose change and put it somewhere you can't see it, you will slowly but surely amass a fortune. Something I have seen over the past decade can only be described as the reverse piggy bank. The concept of credit isn't new. However, some people do not understand the power of credit, and credit can best be described as a reverse piggy bank.

If you open a credit card and carry a balance, you will see this effect in the interest charged by your bank. The interest may look relatively small compared to the debt carried, but many do not understand that in some situations their minimum payments barely cover the finance charges, and it is worse the higher the APR. The other nice hidden secret of the credit card account is the minimum finance charge. If only we could have a minimum interest rate on our savings accounts. These interest charges can add up.
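To see how fast they add up, take some hypothetical round numbers: a $5,000 balance at 24% APR accrues roughly $5,000 × 0.24 / 12 = $100 in finance charges every month, so a $110 minimum payment moves the balance down by only about $10.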

On the topic of credit, it is also important to understand that credit is not a necessity. We have been conditioned to believe that life cannot work without a FICO score. On the contrary, if you choose a savings-based financial strategy, saving now and buying later, you can avoid relying on these entities for the things you want in life. Even a mortgage can be obtained without a FICO score, as long as you meet a few traditional requirements such as job history and steady income. The idea comes from the concept of self-reliance: building up an emergency fund to fall back on when you would traditionally have used a credit card.

Things don't happen on their own. When someone tells me that they have no time to do the things they classify as high priorities, such as spending time with kids and family or taking time for themselves, I tell them that they just don't make the time. In the same way, you have to make room in your budget for savings. Even with mounting debt, it is important to have money saved. This will keep you from having to use the credit card again in the event of an emergency. Saving has to be a higher priority in certain circumstances than paying debt down. Once you have a satisfactory level of savings, then paying down debt can become the priority.

Some financial counselors recommend savings of $1,000. My personal test is the necessity-failure test. Take an item that you rely on to be there and to never fail. A car is a good example. Take the average cost of a pretty costly repair, such as a transmission rebuild or major engine work, and use that as the goal of your emergency fund at the beginning of tackling debt. If you consider the cost of some moderate auto work to be about $1,500, that gives you an idea of how much you should have saved. Everybody's savings goals will be different, and they should be. There is no really good rule to follow other than the one based on your own finances.

Collecting money and saving is what will give you the things you want, plus the satisfaction of knowing that it is all yours and no one else can lay claim to it. But even if you have money, you still have to be smart with spending and choose toys, needs, and wants wisely. Even a million dollars can be spent in the blink of an eye, when it could have provided you true independence instead of a quick scratch of an itch that can wait.