blog.sojoodi.com

December 30, 2011

Top 50 movies of all time list

Filed under: Interesting — S Sojoodi @ 1:47 pm

This is an interesting tidbit.  I was reading about how the video game “Call of Duty: Black Ops” grossed $650m in the first five days after its release, a bigger launch than most blockbuster motion pictures. My curiosity led me to a list of the top 50 movies of all time.  What I was really curious about, though, is how those movies rank once each gross is adjusted for its era; that is, what the top-grossing 50 movies of all time are in inflation-adjusted (2010) dollars.

Based on the top 500 list from IMDB and using the CPI data from the US Department of Labor (source) to convert box office sales to 2010 dollars, we have the following list:

Title  Inflation-Adjusted Gross (2010 $)  Rank
Gone with the Wind (1939)  $3,115,600,762.88 1
Snow White and the Seven Dwarfs (1937)  $2,799,566,370.14 2
Star Wars: Episode IV – A New Hope (1977)  $1,658,151,402.15 3
Bambi (1942)  $1,374,833,049.08 4
The Sound of Music (1965)  $1,129,546,487.24 5
One Hundred and One Dalmatians (1961)  $1,115,518,394.65 6
Jaws (1975)  $1,053,531,598.51 7
The Exorcist (1973)  $1,004,395,720.72 8
E.T.: The Extra-Terrestrial (1982)  $982,580,124.99 9
The Jungle Book (1967)  $925,805,611.26 10
Titanic (1997)  $816,012,471.23 11
The Sting (1973)  $783,621,621.62 12
Doctor Zhivago (1965)  $773,187,174.60 13
Avatar (2009)  $772,915,033.31 14
Star Wars: Episode V – The Empire Strikes Back (1980)  $767,653,006.29 15
Mary Poppins (1964)  $719,400,000.00 16
The Godfather (1972)  $703,138,409.95 17
The Graduate (1967)  $681,394,258.56 18
Star Wars: Episode VI – Return of the Jedi (1983)  $676,599,790.78 19
Butch Cassidy and the Sundance Kid (1969)  $607,720,441.42 20
Grease (1978)  $606,387,730.06 21
Love Story (1970)  $597,814,432.99 22
Indiana Jones and the Raiders of the Lost Ark (1981)  $581,272,067.90 23
The Rocky Horror Picture Show (1975)  $566,785,481.52 24
American Graffiti (1973)  $564,639,639.64 25
Airport (1970)  $564,603,987.11 26
Star Wars: Episode I – The Phantom Menace (1999)  $564,059,224.44 27
The Dark Knight (2008)  $540,004,186.24 28
Jurassic Park (1993)  $538,262,366.78 29
Blazing Saddles (1974)  $528,417,849.90 30
The Towering Inferno (1974)  $512,941,176.47 31
Shrek 2 (2004)  $503,709,295.12 32
Ghostbusters (1984)  $500,623,676.61 33
Beverly Hills Cop (1984)  $492,567,747.83 34
Spider-Man (2002)  $489,205,056.98 35
Forrest Gump (1994)  $484,970,855.11 36
The Lion King (1994)  $483,105,359.10 37
Home Alone (1990)  $476,633,136.76 38
Animal House (1978)  $473,447,852.76 39
Close Encounters of the Third Kind (1977)  $461,541,254.13 40
Pirates of the Caribbean: Dead Man’s Chest (2006)  $457,445,996.55 41
Smokey and the Bandit (1977)  $455,920,120.53 42
One Flew Over the Cuckoo’s Nest (1975)  $453,828,996.28 43
Rocky (1976)  $449,161,403.27 44
Superman (1978)  $448,765,765.71 45
The Lord of the Rings: The Return of the King (2003)  $446,685,852.91 46
Batman (1989)  $441,606,334.13 47
Spider-Man 2 (2004)  $430,896,668.47 48
The Passion of the Christ (2004)  $427,311,093.56 49
Back to the Future (1985)  $426,700,075.43 50
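For the curious, the adjustment itself is just a CPI ratio. Here’s a rough Ruby sketch of the idea; the CPI figures below are approximate annual averages, not the exact Department of Labor series used for the table above.

# Approximate annual-average CPI values (the real series comes from the US Department of Labor).
CPI = { 1939 => 13.9, 1977 => 60.6, 2010 => 218.056 }

def inflation_adjust(gross, year, base_year = 2010)
  gross * CPI[base_year] / CPI[year]
end

# e.g. Star Wars (1977) grossed roughly $461m at the box office;
# in 2010 dollars that works out to about $1.66 billion, in line with the table above.
puts inflation_adjust(461_000_000, 1977)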

October 10, 2011

Quick math look up tables

Filed under: Interesting — S Sojoodi @ 2:16 pm

I’m writing down these tables mostly for my own future quick reference.  Feel free to use them.

Percentage-to-Fraction look up:

1/1 100%
1/2 50%
1/3 33.33%
1/4 25%
1/5 20%
1/6 16.67%
1/7 14.29%
1/8 12.5%
1/9 11.11%
1/10 10%
1/11 9.09%
1/12 8.33%
1/13 7.69%
1/14 7.14%
1/15 6.67%
1/20 5%

 

Sigma Rule (Normal):

sigma probability
0.5 38.29%
1 68.27%
2 95.45%
3 99.73%
4 99.99%
5 99.9999%
6 99.9999998%
7 99.9999999997%
10 100.00%

 

In Excel use: =2*NORM.DIST(E2,0,1,TRUE)-1 (where cell E2 holds the sigma value)
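Outside Excel, the same number is just the error function. A one-line Ruby equivalent, for reference:

# P(|X| <= sigma) for a standard normal, i.e. 2*NORM.DIST(sigma,0,1,TRUE)-1
def sigma_rule(sigma)
  Math.erf(sigma / Math.sqrt(2))
end

(1..6).each { |s| puts "#{s} sigma: #{(100 * sigma_rule(s)).round(7)}%" }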

More to come.

January 18, 2011

Privacy concerns and issues with Scribd and social networks (Facebook in particular)

Filed under: Interesting — S Sojoodi @ 2:32 pm
Screen shot of my now-deleted Scribd account showing my friend names and images pulled from Facebook

Today, to my dismay I found a portion of my private Facebook social graph in the public search pages on Google under the social profile page of my now-deleted Scribd account (see picture above).

I can tell that the data was pulled from Facebook because of the particular profile image which is only on my Facebook profile (not LinkedIn, not Twitter, and not Google Buzz).

I can understand that Scribd may want to build its own social network for its users (I haven’t thought about why it would, and I don’t care). And I certainly have no issue with Scribd publishing the social graph of willing users.  But my numerous issues are mostly with the way they obtained this information, and with the fact that they did not ask for my permission before proceeding to share this data with the whole world.

Here are the particulars of my issues with what I saw today:

  • I use (or used to use as of now!) Facebook for social fun and keeping in touch with friends online, not as a professional tool, and certainly not as a public publishing platform.  My Facebook profile picture, my friends’ profile pictures, etc are not ones I want the CEO of the next company I work for to see in the first or second page of Google Search results for my name! (the attached picture is exactly what you will see if you Google me and dig around for a bit) So the fact that Scribd did not respect my choice to keep that information private is a serious problem.
  • I have no idea how Scribd got this info.  I have not signed in using my Facebook account. I did have a Scribd account until this morning, but I am sure I never gave Scribd my Facebook login.  This concerns me.
  • The data that Scribd published is public (not by my choice) and very sticky. So even once I noticed that the whole world could see a lot of private data about me, I could not remove the pages immediately; I am at the mercy of Scribd to pull the data down and then Google to remove it from their caches.  This is exactly why my profile is private on Facebook.  Mark Zuckerberg might ask: “what do you have to hide?”.  To this I would answer: “nothing, but I don’t choose to publish every detail of my online life on a public forum”.

I didn’t wake up today thinking I would go on a rant about Consumer privacy online, but the surprise of seeing my private data out in the open made me take the time to write about this.  I am surprised that a company with a great service, impressive investors and team (http://www.crunchbase.com/company/scribd), and $12.8m in funding would make such a mistake.

I know we live in an age where privacy is considered a relic in the online world. So be it; I have asked Scribd to pull my personal data down and will pursue this issue until the data is all removed.  In the meantime, to prevent future instances of this and to counter the intrusive nature of the Facebook Like button and the like, I recommend the following:

  • Sign out of Facebook and Twitter after use every time.  Or better, have a separate browser for using Facebook and Twitter and use another browser when you’re just searching the web and doing work.
  • Clear your browser’s cookies after each visit to Facebook.  This will ensure no 3rd parties take your private data without your permission (you can still sign in or choose to give your Facebook information to 3rd parties if desired)
  • Search yourself on Google, Bing, etc every once in a while to see if you’re involuntarily sharing more than you intend to. Then work with the offending website to pull your data down.
  • Disconnect yourself by closing down your Facebook, etc account.  I haven’t done this yet as I find Facebook a valuable service.  But everyone has his own way of doing cost-benefit analysis and his own limit. I’m just getting closer and closer to the last straw on this. If my privacy cannot be managed effectively, I will reluctantly but surely remove my Facebook account.

In the end, I’m thinking some soul-searching is in order to see whether or not it makes sense to have a Facebook account.

December 23, 2010

Renting movies on iTunes – How Blockbuster and Rogers Video will survive for the time being

Filed under: Interesting — S Sojoodi @ 5:19 pm

The other night my girlfriend and I decided to watch a movie, and in line with the spirit of the holidays (and the cold weather in Toronto) we decided to give iTunes a try instead of going downstairs to rent a DVD from Rogers (there’s a Rogers Video literally down the stairs).  Here is what’s broken about the whole process from a user’s point of view:

  • Price-wise, I have no incentive to go to iTunes even though price is not really an issue for me as a non-frequent movie watcher.
  • Download time: maybe it’s my connection (it isn’t; I’ve had faster downloads before!) but the movie took 4 hours to download, which was a downer because we ended up having to watch it the next night.  That basically ruined the experience.  Apple should really look into better streaming options (local servers or peer-to-peer come to mind).
    • What’s more, due to bandwidth caps from my ISP, I would think twice before renting a 1.25GB movie which self-destructs in 24-48 hours!
  • iTunes cycles through a few pages to “re-establish” my payment info.  Interestingly enough, the pages were pre-filled with my info anyway.  So I’m not sure what the purpose of this exercise was.
  • Searching and browsing through titles was not pleasant because of iTunes’ slow interface on my PC. (My MacBook Pro is just as slow, so I suspect it’s the networking within iTunes rather than the machine.)
  • This one is about watching movies on computers in general, not specific to iTunes.  Until everyone has a media centre computer, a Web/Apple/Google TV, or some sort of smart web box, you cannot compete with the convenience of a DVD player. Here is what I mean:
    • Connecting my laptop to TV through HDMI is a pain.
    • No remote control for pausing and fast-forwarding/rewinding. I have to get off the couch, play with the mouse until I see the pointer, move to FF/RWD button, then move mouse (visible on TV screen not the laptop) to Play button and click again.
  • Worst of all for iTunes/Netflix, and great for Rogers Video/Blockbuster/etc, is that putting together the full connection to the TV (HDMI, iTunes, Downloads page, Search, switch to full screen, etc etc etc) is downright hostile to anyone who isn’t a geek!  No normal person would want to go through this pain at this point.

All in all, I believe that the process is still not fully baked.  But once TVs become smarter I can see a lot of the issues addressed.  At that point I’ll revisit the trade-off of not leaving the convenience of our living room versus the time and planning effort required to download the rental movie.  For now, I’ll stick to DVDs.

November 23, 2010

Patents and such

Filed under: Uncategorized — S Sojoodi @ 11:18 pm

Today I found out that the only patent I have ever been an inventor (co-inventor) of has been filed under an incorrect variation of my name: SOJOODIL!  “Remote Control System and Method”:

http://www.wipo.int/pctdb/en/wo.jsp?WO=2009135312

http://www.sumobrain.com/patents/wipo/Remote-control-system-method/WO2009135312.html

Yes, I’m getting excited about a single patent. There are many people in the world who have their names on quite a few patents and don’t talk about it, but I’m darn proud of this one and am going to write about it.  That project was special because my team also managed to develop the BlackBerry® software piece of the system based on this patent and won the Best of WES award for our client in the Consumer category in 2008 (link to article on the WES 2008 Consumer category).

December 19, 2007

My GMAT Experience!

Filed under: Uncategorized — S Sojoodi @ 8:59 pm

I was busy (read: bootcamp!) preparing for and writing the GMAT over the past two weeks. That’s why I haven’t written much lately, but I thought I’d share with you my fun experience with the GMAT!

First off, this was a really quick decision. I had thought of writing this test for a while, but hadn’t really had the opportunity to do it. So, this time around, as soon as I had time to breathe, I decided to just go for it and get it over with. My strategy was simple: just pick a date two weeks out (well, originally I was more ambitious and wanted to do it in one week, but I had a humbling experience). Then, every day, in an all-you-can-eat-buffet-like approach, fill your brain with training before it realizes what happened and gets tired!

I stuck to a strict and intensive schedule to make sure I did well on the test. I tried to do a timed test a day, and by the end I had 4 or 5 tests under my belt. I highly recommend the GMATPrep and POWERPREP tests. I also did most of the GMAT Official Review (11th ed, the orange one) and read through some parts of GMAT 800 by Kaplan.

In the final days of prep, I came across this website which I wish I’d seen much earlier: Beat The GMAT. I highly recommend having a look at this. I also took review notes while studying the books and writing tests. I’ve included a part of the document here (GMAT study notes). If you are interested in the full document, send me an email. (I’m doing this to see how many people are actually interested)

Another piece of the strategy was time management. This changes from person to person, but I came up with and used the easy-to-memorize V19-40-30-20/Q17-40-27-20 rule for myself. This cryptic rule basically means: For Verbal, be on question 19 or further when the clock shows that you have 40 minutes left, and be on question 30 or further when it hits 20 minutes. Same for the Quantitative section. I have a spreadsheet for helping with making these rules. If you are interested in seeing and using it, let me know.
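If you want to cook up your own version of this rule, the arithmetic behind it is just linear pacing. A small Ruby sketch, assuming the 2007-era section sizes of 41 verbal and 37 quant questions in 75 minutes each (adjust as needed):

# print pacing checkpoints: the question you should have reached
# when a given number of minutes is left on the clock
def checkpoints(total_questions, total_minutes, minutes_left_marks)
  minutes_left_marks.map do |left|
    question = ((total_minutes - left) * total_questions / total_minutes.to_f).round
    "#{left} min left -> question #{question}"
  end
end

puts "Verbal:", checkpoints(41, 75, [40, 20])
puts "Quant:",  checkpoints(37, 75, [40, 20])

This reproduces the V19-40-30-20 / Q17-40-27-20 checkpoints above.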

Also, make sure you don’t forget preparing for the Analytical Writing Assessment part. There are good resources online (listed at the bottom) for practicing and improving time-management for these essays.

In the end, I had a blast on the test day. And the result was amazing, too. Actually, way better than expected! I had studied hard to get a score above 700. I was pleasantly shocked to see my score in the end. If I were a little better at marketing, I would change the title of this post to: “get an unbelievable GMAT score in two weeks!”

Hope this post is of use.

Last but definitely not least, I’d like to thank Kevin Au and Todd Presswood for sharing with me invaluable information regarding the test.

Additional Material

Email me for this stuff:

  • The full study notes
  • The Time-Management Rule Maker spreadsheet
  • My actual score if you’re interested

Online Resources

Here is a list of webpages that I may have used.

  • http://www.4tests.com/exams/examdetail.asp?eid=31
  • http://www.crack-gmat.com/gmat-test.htm
  • http://www.projectgmat.com/problems.html
  • http://beatthegmat.blogspot.com/2005/08/reflecting-on-my-gmat-experience.html
  • http://www.mba.com
  • http://www.kaptest.com/Business/Business-School/BU_home.html

December 2, 2007

Should a small business give discounts?

Filed under: Marketing — S Sojoodi @ 11:33 pm

While reading P. Barrow’s The Best-Laid Business Plans, I came across an interesting argument he makes against giving discounts as a small business. The reason for writing this post was to inspect the simple but condensed table of data he provides (“Pricing Ready Reckoner”), which, in less than a third of a page, makes a very clear case against discounting.

Here’s the argument. In order to maintain the same level of profit after a discount, you have to sell more, and how much more depends on your original gross margin and the size of the discount. The extra sales needed in each case are summarized in the following table as percentages.

                    Existing gross margin (%)
% price discount    5      10     15     20     30     40
1                   25%    11%    7%     5%     3%     3%
3                   150%   43%    25%    18%    11%    8%
5                   -      100%   50%    33%    20%    14%
10                  -      -      200%   100%   50%    33%
15                  -      -      -      300%   100%   60%

The math is simple: if you give away a portion of the margin as a discount, you have to make up for the difference by increasing sales. But the engineer in me wants to make the calculation more formal:

G=gross margin (%)
R=retail price for unit
D=discount (%)
S=number of items sold, S’=number of items sold (in discounted case)

Without discount we have:
Gross margin=R×G
Cost of Goods Sold per unit=R×(1-G)

With the discount, on the other hand:
Sales revenue per unit=R×(1-D)
Cost of Goods Sold per unit=R×(1-G), remains the same
New gross margin=R×(G-D)

To make the same level of profit
S’×R×(G-D)=S×R×G

Which means we have to sell (S’/S - 1)×100 percent more to compensate for the discount:
Extra sales needed (%) = 100×[G/(G-D) - 1]

The rest is easy: put the formula in a spreadsheet (or in the little Ruby sketch below) and you get that nice table above. There is another table in the book which shows the flip side of the situation, a price increase, but I’ll spare you.
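Here is a minimal Ruby version of that spreadsheet exercise, reproducing the table above straight from the formula:

# Extra sales (%) needed to keep the same profit after a discount:
# 100 * (G / (G - D) - 1), with G = gross margin and D = discount, both in percent.
def extra_sales_needed(gross_margin, discount)
  return nil if discount >= gross_margin   # the discount wipes out the margin entirely
  100.0 * (gross_margin / (gross_margin - discount).to_f - 1)
end

margins   = [5, 10, 15, 20, 30, 40]
discounts = [1, 3, 5, 10, 15]

discounts.each do |d|
  row = margins.map { |g| (x = extra_sales_needed(g, d)) ? "#{x.round}%" : "-" }
  puts "#{d}% off:  #{row.join('  ')}"
end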

As a final note, I believe that you could still make a good case for giving discounts on your products/services for a variety of reasons, such as building relationships in hopes of repeat customers, gaining market share, etc. However, Paul’s argument is interesting, and it rests on the assumption that your customers buy from you because of the flexibility, quality, and service you provide. I would add the fairness and competitiveness of the original pricing to that list.

Simple Search Feature in Rails/MySQL (Part 2)

Filed under: MySQL,Rails — S Sojoodi @ 9:42 pm

I meant to make this post much earlier, but all the start-up fun kept me from doing it. The first part of the post was published on Oct 17 (Simple Search Feature) and covered my Do It Yourself solution for the catalog search feature in Giftify.

The problem with the original indexing scheme was that it created separate “lookup word” entries for words that contain one another. For example, the word “rose” might appear in the descriptions of gifts 1 and 3 and “rosemary” in 2 and 3. A correct indexing scheme would make the items indexed under “rosemary” a subset of those indexed under “rose”.

[Figure: the incomplete indexing described above]

One way of fixing this problem is using a query with the LIKE statement in SQL to get all LookupWords that are similar to the actual search term and then perform a complex query, which will have to be created dynamically based on the results of the LIKE query. The complexity of this solution, the shakiness of LIKE/DISTINCT constructs, and the fact that a lot of stuff needs to happen for every search led me to implement a better indexing scheme.
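For reference, that rejected approach would have looked roughly like the following in the Rails 2 style of the day (a sketch only; LookupWord and its item association are assumed names, not the actual Giftify models):

# grab every indexed word that contains the search term...
matching_words = LookupWord.find(:all,
  :conditions => ["word LIKE ?", "%#{search_term}%"])

# ...then union their item lists; the dynamic, multi-word query mentioned above
# would still have to be built on top of this result
item_ids = matching_words.map { |lw| lw.item_ids }.flatten.uniq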

I have already hinted at the solution: when indexing, make sure that the items under “rosemary” (and “roses”) are all included in the set of items associated with “rose”. This way, looking up the word “rose” in the catalog returns all three items.

The Ruby implementation of the improved indexing scheme follows. Note that the trivial case of plural words containing the singular ones is handled via pluralize and singularize functions in Rails.

$lookup_word_to_item_map = {}

Item.find(:all).each do |item|
  composite_search = item.name+" "+item.description+" "+item.categories_string

  # take all the words (alpha only, lower-cased), singularize them, and array-ize
  composite_search_array = composite_search.downcase.scan(/[a-z]+/).collect {|w| w.singularize}

  # remove all words that are less than 3 letters long, then de-duplicate
  composite_search_array.reject! {|w| w.size < 3}
  composite_search_array.uniq!

  # add data to hash, creating the array the first time a word is seen
  composite_search_array.each do |word|
    $lookup_word_to_item_map[word] ||= []
    $lookup_word_to_item_map[word] << item.id unless $lookup_word_to_item_map[word].include? item.id
  end
end

# in ruby str1[str2] returns nil if str1 doesn't contain str2,
# so biggerword[baseword] is truthy when baseword is a substring of biggerword
$lookup_word_to_item_map.keys.each do |baseword|
  extra_items = $lookup_word_to_item_map.keys.collect do |biggerword|
    $lookup_word_to_item_map[biggerword] if biggerword[baseword]
  end
  $lookup_word_to_item_map[baseword] << extra_items
  $lookup_word_to_item_map[baseword].flatten!
  $lookup_word_to_item_map[baseword].compact!
  $lookup_word_to_item_map[baseword].uniq!
end
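Once the map is built, a search is just a hash access on the singularized term. A small sketch (not the actual controller code):

def search_items(term)
  key = term.downcase.singularize
  ids = $lookup_word_to_item_map[key]
  ids ? Item.find(ids) : []
end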

A quick note on efficiency: the indexing process becomes slower and the index table grows larger under this scheme, but the trade-off is worth it because on-the-fly searches stay fast and simple.

Hope this was useful and thanks for dropping by.

November 23, 2007

Foreign Exchange Rate Arbitrage Detection

Filed under: Uncategorized — S Sojoodi @ 7:21 pm

Here’s a little brainteaser to think about while commuting to work on the Greyhound bus! (that’s another story maybe for another post).

If you go to x-rates.com or the Yahoo Currency Converter you can obtain a table of exchange rates for the world’s major currencies. Here’s a sample:


USD U.K. £ CAD Euro AU $
USD 1.00000 2.05639 1.01142 1.48250 0.87440
U.K. £ 0.48629 1.00000 0.49184 0.72092 0.42521
CAD 0.98870 2.03316 1.00000 1.46574 0.86452
Euro 0.67454 1.38711 0.68225 1.00000 0.58981
AU $ 1.14364 2.35177 1.15671 1.69544 1.00000

* (USD to CAD element was changed from 0.98870 to 1.0 in the example below)

The data is essentially a matrix as follows (with a bit of rearranging and reducing the number of currencies to three); call it M:

       C    U    E
  C [  1   CU   CE ]
  U [ UC    1   UE ]
  E [ EC   EU    1 ]

C for Canadian Dollar
U for US Dollar
E for Euro
CU: price of 1 Canadian dollar in US dollars as reported in the table
UC: price of 1 US dollar in Canadian dollars as reported in the table

Now multiply the FX table by itself. Looking at a single entry of M×M, say row C, column U:

(M×M)[C,U] = 1×CU + CU×1 + CE×EU

This matrix, in an efficient, arbitrage-free market, should simplify to n times the original matrix: converting Canadian dollars to Euros and then to US dollars should give the same result as converting directly, so CE×EU = CU and the entry above becomes 3×CU. In other words, ideally:

M×M = n×M

Therefore, the test is to form the Foreign Exchange matrix M, and then calculate the following expression in Matlab, Excel, etc:

M×M - n×M

where n is the number of rows in the FX table (3 in the above example).

If the resulting matrix has non-zero elements, it is theoretically possible that arbitrage can happen. Note that “non-zero” is relative as it is unlikely that you get all zeros because of round off errors. But, dividing (the absolute value of) the biggest element by n gives you a good idea about the potential amount of arbitrage. For example, applying the formula to the sample data at the beginning of the text shows that the inconsistency in the table of rates is, more or less, about 0.678%. (Again, the sample table was tampered with for the sake of the example. Using the original data, the arbitrage potential was 0.00068%.)
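Here’s a quick Ruby sketch of the same test using the standard library’s Matrix class, on the three-currency reduction above (an illustration only, not the spreadsheet mentioned at the end of the post):

require 'matrix'

# FX matrix for CAD, USD, EUR taken from the table at the top of the post:
# m[i,j] = price of one unit of currency i in currency j
m = Matrix[
  [1.0,     1.01142, 0.68225],   # CAD
  [0.98870, 1.0,     0.67454],   # USD
  [1.46574, 1.48250, 1.0    ]    # EUR
]

n = m.row_count
test = m * m - m * n   # should be (roughly) the zero matrix if the rates are consistent

# biggest absolute deviation divided by n, the rough arbitrage estimate described above
max_dev = test.to_a.flatten.map { |x| x.abs }.max
puts "inconsistency: about #{(100.0 * max_dev / n).round(5)}%"

Setting the USD-to-CAD entry to 1.0, as in the tampered example, makes the reported inconsistency jump dramatically.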

I recommend that this formula be used for a quick test as opposed to a full diagnostic review of your FX table. There are definitely better and more elaborate ways of arbitrage detection and pinpointing what algorithm can make you the most amount of money. This post is only meant to show how a little bit of Linear Algebra can be used as a quick test of online data.

Also, the chances of making money this way are very low. Markets are quite efficient these days, and all FX trading accounts have a trading spread, which basically charges you a small premium every time you trade. That small amount is usually large enough to swallow any arbitrage opportunity detected as above. If you know of a trading account that isn’t like that, I would love to hear about it ;)

Let me know what you thought of this post. Do I have what it takes to be a Quant!? If you’re interested in the spreadsheet that does the calculations, let me know.

November 21, 2007

Coupon Codes!

Filed under: Marketing — S Sojoodi @ 5:07 pm

Last week, I spent a bit of time on sales coupons for Giftify. Since I have no formal marketing education, I had to rely on common sense and intuition. Please feel free to give me feedback on this; I’m really interested to know what other people think of it.

The objective is to be able to sell at a discounted price to select individuals, groups of individuals, or everyone who carries a valid coupon code issued by the vendor. Sounds simple enough.

I thought of a couple of aspects of coupons:

Discount Logic

  • Percentage coupon; e.g. 10% off the gift price
  • Absolute amount coupon; e.g. $15 off the gift price
  • Combinable with other promotions

Authorized User

This is who the coupon is meant for:

  • a specific user; e.g. A frequent customer, reviewer
  • a group of people; this is similar to the individual user, but a little different in the database model
  • general public; in this case you want to let anyone with a promotional coupon code benefit from the discount, e.g. GoDaddy promo codes.

Discounted Merchandise

This is again more of an implementation question:

  • Authorize use of coupon on select gifts in the catalog
  • Coupon is valid on every item in the catalog

Profit Safety Net

Is the objective of the coupon (promotion, appreciation, etc.) worth losing money on certain orders from users with coupons? For example, a person has a 15% discount coupon but wants to use it on a gift with a margin of 10%.

  • Coupon allowed to override minimum gift price
  • Coupon has a limit

As for implementation, I will give you a brief overview of my design. If there’s interest in seeing more details, please let me know and I will try to write a follow-up post. In summary, I used a Single Table Inheritance approach to model coupons in our database. That is, all the various types of coupons (Percent/Absolute/Limitless/Limited) are stored in a single table, along with a type column and the coupon details. In the controller code, when retrieving a coupon from the database, I look at the type first and use the correct logic based on it.
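To make that concrete, here is a minimal sketch of what such a Single Table Inheritance setup might look like. The class and column names are my own illustration, not the actual Giftify schema:

# coupons table: id, type, code, amount, user_id (optional), price_floor (optional), ...
class Coupon < ActiveRecord::Base
  belongs_to :user   # nil when the coupon is open to the general public

  def discounted_price(price)
    raise NotImplementedError
  end
end

class PercentageCoupon < Coupon
  # amount is a percentage, e.g. 10 for "10% off"
  def discounted_price(price)
    price * (1 - amount / 100.0)
  end
end

class AbsoluteCoupon < Coupon
  # amount is a dollar figure, e.g. 15 for "$15 off"
  def discounted_price(price)
    [price - amount, 0].max
  end
end

Rails instantiates the right subclass based on the type column, so a controller that does Coupon.find_by_code(params[:code]) can simply call discounted_price without caring which kind of coupon it got.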

That’s all. Please, let me know what you think of this post. Was it useful to you? Do you know of a better resource online which captures the information in this post and more?

btw, here’s a coupon code for Giftify to show my appreciation for reading this blog ;) SOJOODIBLOG


© 2012 Sahand Sojoodi