*Photo: Happy Birthday Big Guy*

*Photo: Biker Gang*

*Photo: So Good, and So Cheap*
The city itself is BEAUTIFUL. With many sights to see, like the Charles Bridge, the Dancing House, and Prague Castle (there is also an old castle on the southwest side of the city with a beautiful view of the whole city), I am absolutely certain I would go back to Prague if I had the chance. But by Saturday morning the fun in Prague had to come to an end, and we hopped on a bus headed for ….
Day 1 went mostly to rest and very little sightseeing. But Day 2 was full of wonders. The city is best seen on foot, with its many cathedrals, the Volksgarten, the Imperial Palace, museums, and an amazing military site. I have no expertise on the nightlife of Vienna, as I just relaxed and went to a couple of local bars. One of the highlights of the trip was a concert/opera of compositions by Mozart and Strauss. Listening to classical music is one thing, but watching the performance is simply breathtaking. I never thought an opera would be much enjoyment for a person like myself, but I literally felt chills when the voices of the performers echoed through the concert hall. By Monday morning, I had come to the conclusion that Vienna had more historic sights to see, but Prague was a natural beauty all its own. Now it was time to board the train and head for the last destination on the first half of the trip…
I’m currently in Week 7 of my epic semester abroad in Paris. What have I learned while here so far? Knowing French would be most beneficial. Now, I haven’t had much trouble getting around or communicating (luckily, other students were prepared and have been my personal translators), but still, learn the native language of a country you are planning to spend half a year in. Also, being super paranoid about getting mugged or pick-pocketed can lead to a self-fulfilling prophecy. Losing my iPhone 4 on the 3rd day here was like losing a child. Hell, it was worse than losing a child, because a child does not capture video in 720p or have a retina display. Now I’m stuck with a whack-berry with a weird French keyboard that does not MAKE SENSE WITH THE REST OF THE WORLD. Those are very important lessons to have learned in the first 7 weeks in a foreign country.
Now, this being my first post on my awesome blog, I had to think long and hard (that’s what she said) about what I want the world to know. As I’m thinking, I’m drinking an ice-cold Coca-Cola, eating barbeque potato chips, and watching SportsCenter via SlingMedia, which allows me to watch my cable subscription TV from my home in America anywhere I have an internet connection. That is a true gift from God. It seems no matter where you are in the world, “home is where the heart is”, and I’m doing everything to keep my home with me. This brings me to last night’s events and the topic of the first Imaginarium of Mr. Berani:
*Photo: Washington looks to his bullpen after C.J. Wilson pitches a brilliant 8.3 innings. The bullpen fails, miserably.*
ALCS Game 1 happened last night and, like everyone predicted … the Rangers led 5-0 going into the 8th?! I was thoroughly thrilled, not because I’m a Rangers fan. I will not claim to be one, because I am not a bandwagon fan and was not there when they were at their worst. (I have been with the Cubs since I could breathe the word sports and have far more to complain about, but that’s a whole other story.) I was thrilled because of the Yankees. They are the empire. And like in all feel-good stories, the evil empire must be brought down. The Rangers have a chance at this, a good chance. It doesn’t hurt that I currently call Dallas home, and Dallas fans need something to root for as the Cowboys completely crush all hearts deep in Texas.
The game started at approximately 7pm CST. With a little math, that equates to 2am Paris time. At 2am on a Friday night/Saturday morning, I stayed in my tiny room and watched Game 1 of the ALCS. Sadly, I passed out right before the 6th inning and had to wait until the morning to learn the outcome. A true choke job by the Rangers. C.J. Wilson had done a spectacular job keeping the heavy bats of the Yankees at bay, with some great defense from the outfield. When Ron Washington decided to take out C.J., which he did at the right time, he went to the bullpen he thought would succeed. Sorry, Ron, better luck next time. I’m disappointed because the Rangers do deserve a chance at the World Series, especially in a year they won 90 games. Today, Game 2 will be played, and it is nearly a must-win situation. No one wants to go to New York down 0-2 in the series.
Side Note: Lincecum vs Halladay tonight = a pitching match-up no true baseball fan should miss.
So, as my first post comes to an end, there are a couple of items for tonight that need to be carefully considered. Texas looks to rebound in NCAAF against 5th-ranked Nebraska. We completely shattered Nebraska’s dream last year by kicking a last-second field goal to win the Big 12 and go to the National Championship. This year we are much weaker and I’m sure they want to murder us. The two LCS games will be must-watch for sports fans. Yes, I’m in one of the most romantic, cultural, straight-up partying capitals of the world (if not the most), and I’m talking about US sporting events for Saturday, October 16.
I guess home really is where the heart is. Let’s hope the Rangers can bring some of that heart to their home.
I wrote my first real Scheme macro today. It was a macro to emulate ML’s pattern-matching.
Writing a type-checker in Scheme, I found myself writing all these list?, eq?, and let* expressions, making my code look far more complicated than necessary. All of that can be done simply and concisely with pattern-matching. For Lisp hackers out there who aren’t familiar with pattern-matching, it’s basically a destructuring-bind where you can have constants or symbols in your binding expression which are compared with the value you are binding to, sort of like a cond where the predicates are implicitly specified in the pattern.
The great thing about pattern-matching is that code to process data looks just like the data you are processing. So it makes it extremely easy to look at code that uses pattern-matching and see what it’s doing, as opposed to equivalent code that uses explicit conditionals and lets.
Without pattern-matching:
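Something along these lines, a made-up fragment checking an `(if test then else)` form by hand (`type-of` and `type-error` are hypothetical helpers, not real library calls):

```scheme
;; Check an (if test then else) expression the explicit way:
;; predicates on the shape of the list, then let* to pull pieces apart.
(define (check-if expr env)
  (if (and (list? expr)
           (= (length expr) 4)
           (eq? (car expr) 'if))
      (let* ((test-e (cadr expr))
             (then-e (caddr expr))
             (else-e (cadddr expr))
             (test-type (type-of test-e env))
             (then-type (type-of then-e env))
             (else-type (type-of else-e env)))
        (if (and (eq? test-type 'bool)
                 (equal? then-type else-type))
            then-type
            (type-error expr)))
      (type-error expr)))
```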
With:
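And the same check with a `match` form (again just a sketch; the exact pattern syntax depends on the macro you use):

```scheme
;; The pattern ('if test-e then-e else-e) both checks the shape of the
;; expression and binds its pieces in one step.
(define (check-if expr env)
  (match expr
    (('if test-e then-e else-e)
     (let ((test-type (type-of test-e env))
           (then-type (type-of then-e env))
           (else-type (type-of else-e env)))
       (if (and (eq? test-type 'bool)
                (equal? then-type else-type))
           then-type
           (type-error expr))))
    (other (type-error expr))))
```

The code reads like the data it is taking apart, which is the whole point.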
Anyway, when I first started using Scheme, I didn’t like the prefix notation and the paren-syntax — or lack of syntax. Compiler writers out there, you know what I mean. 😉 Frankly, I don’t know anyone who does like it at first. I thought, I’d rather write code in a more familiar way, then switch to prefixed constructors when I needed to create code like I would in ML. For example, the SML/NJ compiler exposes the abstract syntax as a datatype, so you can do just this.
However, this means that there are 2 representations of code in the language. Two. …And that doesn’t seem right.
After working with macros (I finally had the conceptual break-through of figuring out how to use double backquotes — ick!), I realized, this makes sense. The idea that code and data have the exact same concrete representation makes total sense. So much so that I believe this is the right way to do it.
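To give a flavor of where double backquotes come in, here is a toy sketch (not my pattern-matching macro) of a macro that writes another macro, using a define-macro-style system such as Guile’s rather than syntax-rules:

```scheme
;; The inner quasiquote has to survive into the expansion,
;; which is where ,', and the double backquote show up.
(define-macro (abbrev short long)
  `(define-macro (,short . args)
     `(,',long ,@args)))

;; (abbrev fn lambda) defines fn so that (fn (x) (+ x 1))
;; expands to (lambda (x) (+ x 1)).
```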
But one question that comes to mind is, why does that one representation have to be prefix notation with parentheses everywhere?
It’s not clear whether being able to write data in the same form as the regular parsed grammar (infixes and all) would be a good thing or not.
One thing I’ve since learned, but had trouble with when I first switched from ML to Scheme, was that in Scheme, the meaning of parentheses is overloaded. In ML, parentheses are used for disambiguation of parsing, and that’s all. This is almost strictly so in OCaml. But in Scheme, parentheses are used not only for specifying the structure of lists and sub-lists in quoted expressions, but also for function application. This confused the hell out of me for a little while, and my first inclination was to avoid using quote (i.e. ') and use list instead. But I soon got over that.
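To illustrate the overloading with one tiny example:

```scheme
(+ 1 2)        ; parentheses mean application here: call + on 1 and 2  => 3
'(+ 1 2)       ; inside a quote they just delimit a list               => (+ 1 2)
(list '+ 1 2)  ; the same list, built by an ordinary function call     => (+ 1 2)
```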
Overall, my experience learning Scheme has been extremely rewarding, as a great way to recognize patterns in something is to experience something completely different from it. And for many decisions that were made in designing ML, the opposite choices were made for Scheme or Lisp.
I went to school at Carnegie Mellon. And there, the computer science department is big on research. For system-level programming, they use C. But for almost all theoretical or type related topics, they use ML. In particular, Standard ML.
So since I was really interested in compilers and type-theory, I became very familiar with ML. First how to use it. Then how to use it to make interpreters. How compilers work. And eventually, how to compile ML code. A relative expert in ML.
While at CMU, I was thoroughly trained in the benefits of strongly typed languages, the pitfalls of weakly typed languages, and why static typing can result in more efficient code than dynamic typing. I was also introduced to the idea of typed intermediate languages — that compilers have multiple phases which translate code to entirely different languages, each of which is strongly typed, getting closer and closer to the target language after each phase. In other words, SourceLang => IL1 => IL2 => … => ILn => TargetLang. And after I got the hang of it, I thought ML was great! Oh, how I came to loathe writing code in other languages. Look! Look how easy and beautiful the code would be in ML.
But recently, for the first time, I’ve had a real reason to write some code in Scheme. Scheme is similar to ML in that it’s functional. But it’s dynamically typed. Moreover, some flavors have “features” in them like dynamic scope that make it very difficult to look at a piece of code and determine whether it will compute gracefully or result in an error. One of the biggest benefits of static typing is that it reveals errors in your programs as early as possible — at compile time. Dynamic typing on the other hand, reveals errors as late as possible. If a branch of code is never taken, you’ll never know whether that piece of code will fail, possibly until you ship your code and your users break it, losing millions (even lives) in the process.
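A toy example of what I mean:

```scheme
;; This definition is accepted without complaint...
(define (describe n)
  (if (>= n 0)
      "non-negative"
      (+ n "uh oh")))    ; ...but this branch is a type error waiting to happen

(describe 5)   ; => "non-negative"
(describe -1)  ; runtime error: + applied to a string
```

In ML the bad branch would be rejected at compile time, whether or not any test ever exercises it.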
So all through school, I was on one side. I was very very close with ML. But now that I’ve been using Scheme (and also toying with Qi [pdf]), I’ve been on the other side. And now, for the first time ever, I can judge ML for what it truly is. And here’s what I’ve found.
I still think ML’s a great language. Early bug detection and the invariants that are captured in types are so utterly essential to writing correct code that I can’t believe it’s still being done the other way. I literally can’t write more than a couple of Scheme functions before I have to write down the types in comments, because otherwise I have to try to hold all this stuff in my head. It’s not something that is in addition to writing the code; types are something I implicitly think about when I create code. When I look at a piece of code, to see if it makes sense I type-check it in my head, in the same way that I execute it in my head when debugging, for example.
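The kind of comment I end up writing looks something like this (my own ad-hoc, roughly ML-style notation, kept in sync by hand):

```scheme
;; lookup : symbol * ((symbol . type) alist) -> type
(define (lookup var env)
  (cond ((assq var env) => cdr)
        (else (error "unbound variable:" var))))
```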
However, I’ve realized there are a few very powerful abstractions that are missing from ML (or lacking).
…Maybe I’ll go into the details later.
As an afterthought to yesterday’s post, I remembered Kyle (another person who prefers Emacs over Eclipse) showing me how he doubles loop iterator variable names, using ii instead of i or jj instead of j. This allows you to search for loop iterator variables without results coming up everywhere the letter “i” is used in your file.
What he was basically suggesting was that I change the code I write in order to accommodate the inadequacies of a tool, namely, textual searching.
This does not make sense to me at all. If I had a tool that understood the scoping rules of my language (e.g. Eclipse editing Java), I could select a variable and the tool would show me all the references to that variable, regardless of its name. Granted, this doesn’t help if you’re searching for all the loops that use the variable name, but (again, maybe this is just me) this is not something I ever do. And if I needed to, a regex would handle that (or find whole word).
Changing your work, no matter how slightly, to fit your tools is an indicator that you need a better tool. Because basically, you’ve found a pattern. And whenever you find a pattern, you’ve found an opportunity for improvement. Changing your work to fit your tools is equivalent to optimization. You should only do it if you have to. In other words, if it is a bottleneck. And there are two ways to optimize for a bottleneck: from the top down, or from the bottom up. Doing it from the top down means you must always think about it. Doing it from the bottom up, abstracting the optimization away in a tool, means you don’t have to think about it anymore. It is equivalent to pushing a process down to a lower level in the hierarchy. And the top level is the only level you have to think about.
…This brings up an interesting question. What is above me in the hierarchy? It would be silly to think I was at the very top.
I was talking with Kyle the other day about how, with programming languages, when you look up at languages more powerful than the ones you know, all you see is what you already know, and the supposedly more powerful languages seem not so great. However, when you look down at less powerful languages, it’s obvious that they are less powerful. I’m not talking about computational power, as all Turing-complete languages can express the same programs. It’s more about abstractions. Paul Graham describes this in more depth, calling it the Blub Paradox.
That is all very well and interesting in itself. But if you accept it, you must also accept its implications. Namely, it makes sense to learn and use more powerful languages.
But it’s not just about languages. It’s about tools. Yesterday at work, I had a discussion with two co-workers about the differences between Emacs and Eclipse as IDEs. Both of them being advocates of Emacs, I figured I could get the lowdown from them, since I only use Emacs occasionally and am not an expert. For writing Java (unfortunately, yes, I must do this at my current job), I prefer Eclipse. But I’m open to using other, more productive tools. So I thought to myself, if Emacs is so great, maybe I can find out from one of these guys why, and then perhaps I’ll switch.
Basically, their argument came down to two things. First, Emacs can be more easily extended than Eclipse, which has a clumsy plugin system. And second, only Emacs allows you to re-map the key bindings, which is extremely helpful for writing code, since that is where you spend the majority of your effort compared to the other tasks you’d use an IDE for.
However, there’s one feature of Eclipse in particular (there are others too, but I won’t go into them) that Emacs does not have, and neither of the guys I talked to knew of a standard extension out there that already provides it: the code refactoring features, specifically renaming variables, classes, and packages.
I admit, any programmer is going to spend ten times as much time writing code; renaming things is a comparatively rare task. However, when it must be done in Emacs or other editors that don’t support it, you must use regex replacing. And this is both tedious and error-prone. Textual find-and-replace will always be error-prone, since it does not take into account the scoping rules of the language.
The fact that Eclipse allows you to do this correctly every time means that the cost of renaming drops to practically zero. This means you no longer have to avoid it, which in this case means you no longer have to plan ahead to try to avoid it in the future. But refactoring and renaming are inevitable. In fact, the more often you refactor the better, because it prevents code complexity and messiness from creeping in. But say you wanted to think of the right name from the beginning, to avoid having to refactor it later. It is impossible to know the correct name for a thing when you create it, because code is fluid — it never stops changing. Requirements change, goals change, scope changes. And so too must the code. Moreover, as you design a solution to a problem, your understanding of the problem changes. So even if the actual problem doesn’t change, your understanding of how to most efficiently solve that problem will change. Thus, refactoring, including renaming, is absolutely inevitable to keep the code as close as possible to the model in your mind.
So why should I use an IDE that is oblivious to such things, forcing me to think about more than I have to? These are details that exist solely because code is expressed as text.
It’s true that such a feature could probably be made as an Emacs extension. I do not doubt this at all. But the fact that it is not already done means that like Lisp over C, or C over machine-code, Eclipse is more powerful than Emacs. And this is obvious looking down the power continuum.
…And as for the whole key bindings thing, personally I almost never find my keystroke rate to be the bottleneck. Writing code is usually limited by my conceptual understanding and the translation of ideas into the language I’m writing in. But maybe that’s just me.
So I don’t think the guys I had this conversation with got this out of it, but because of this conversation, I realized that Eclipse is a more powerful tool than Emacs (for editing Java), and I don’t plan on switching any time soon.
Post Post: Since writing this I’ve realized I’ve fallen into the same trap as others. Namely, the trap of not seeing the power of something you don’t understand. I’ve done the exact same thing that users of less powerful tools and languages do when they look at something more powerful. I’ve disregarded the importance of re-mapping key bindings w/o actually getting to know them. It’s possible this is a huge gain. This doesn’t change the fact that Eclipse is more powerful in other respects. So it is really a value-judgment of mine that the cost of typing slowly is less than having to refactor things manually. But I won’t know for sure until I learn and use the more powerful features of Emacs.
The other day I was thinking about a conversation I had about the way relationships can (and do) get so complicated. But back in school, life was simple. You go to school, come home, do your hw, and pretty much have fun the rest of the time.
Of course back then it didn’t seem simple. It felt really complicated. But that’s the way it always is.
Seeing the pattern makes things simple.
That’s exactly how it is in programming and it applies to life. When you’re writing a big program, you write, you write, it’s complicated, complicated. …But after a while, you start to realize, I’ve written something just like this before. You go back and find the other code you wrote, and sure enough, it’s the same thing except for some details. Details that can be abstracted over — creating a function or a template that can be applied in specific instances, depending on the details of the situation.
That, by the way, is the reason why people who learn (and really get) a functional language like Lisp or ML are so die-hard about it. Languages like that let you abstract over pretty much anything.
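A toy Scheme illustration of the kind of thing I mean (names are my own):

```scheme
;; Two functions that are "the same thing except for some details"...
(define (sum-list lst)
  (if (null? lst) 0 (+ (car lst) (sum-list (cdr lst)))))

(define (product-list lst)
  (if (null? lst) 1 (* (car lst) (product-list (cdr lst)))))

;; ...and the pattern abstracted: the details (the operator and the
;; base case) become parameters.
(define (fold-right-simple op base lst)
  (if (null? lst)
      base
      (op (car lst) (fold-right-simple op base (cdr lst)))))

(define (sum-list2 lst)     (fold-right-simple + 0 lst))
(define (product-list2 lst) (fold-right-simple * 1 lst))
```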
But you have to see the pattern first.
You can almost always by-pass an obstacle by side-stepping in another dimension.
It’s like a lab rat in a maze trying to reach the cheese. The walls of the maze are between it and its goal, the cheese. But if the rat were able to move vertically, above the maze walls, it would be able to move freely towards the cheese.
It’s a very simple concept. You’re walking. There’s something in front of you. You go around it.
But what are you doing, if you abstract away all the details? What you’re left with is an amazingly powerful problem-solving method. If movement towards your goal is blocked, move in another dimension. This will allow free movement in the previously blocked dimension.
I originally thought of this when I was thinking about time. If we’re moving across time as if it were another dimension of space, you can use this method of side-stepping into another dimension to break barriers in time. (Or more accurately, go around them.)
Oh, I think what triggered it was that I was trying to remember a dream I had. So I was thinking about how being in the same place — the same environment and position — makes thoughts come back to you. It’s as if the thoughts are emanating there, and your body is an antenna that, if positioned in a certain way, will pick up certain thoughts; reposition the antenna, and you pick up another channel.
So I thought, maybe it’s like that with time too. In other words, what time it is also determines what thoughts are emanating. So if you’re closer in time, you’re more likely to have the same thoughts. And that makes sense, because you are more likely to think about the same thing temporally close together. (Reminds me of temporal and spatial locality which are exploited for caching.)
One block you often run into in the time dimension is forgetting. You don’t remember something that you fully intend to. In this case you can side-step the block into another dimension — a spatial dimension. You create something like a reminder note that persists in space, moving forward in time freely. It makes perfect sense. …I wonder how else this can be applied.