Saturday, January 18, 2020

Single Family Housing Market vs. Condo Market – A Good Champ-Challenger Starting Point

Champ-Challenger analysis is an excellent way to validate one's primary research. If the local housing market is the primary research focus, competing stats from the condo market can offer an excellent challenge in the form of validation. This comparative approach, drawn from the same collective market, also gives readers the context to better understand the primary stats. In valuation analysis, unchallenged stats leave a void that even technical valuation experts like valuation modelers often fail to recognize. Here are some specifics:

1. Presenting the Components – While analyzing the single-family residence (SFR) market, one should analyze and present it separately from townhomes (including PUD/HOA), condos, and co-ops. Instead of combining them into one category and averaging the results, component-level analysis makes more sense, as their demand characteristics are usually different. An alternative approach could be (value) weighted averages.
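
As a quick illustration of the component-level view versus a value-weighted roll-up, here is a minimal Python/pandas sketch; the property types, prices, and column names are made up for illustration:

```python
# A minimal sketch of component-level stats vs. a value-weighted average,
# assuming a hypothetical sales DataFrame with 'prop_type' and 'sale_price'
# columns (names and numbers are illustrative, not from any real dataset).
import pandas as pd

sales = pd.DataFrame({
    "prop_type":  ["SFR", "SFR", "Townhome", "Condo", "Co-op", "Condo"],
    "sale_price": [450_000, 510_000, 380_000, 300_000, 250_000, 320_000],
})

# Component-level view: each segment keeps its own demand signature.
by_component = sales.groupby("prop_type")["sale_price"].agg(["count", "median"])
print(by_component)

# The alternative mentioned above: a (dollar-volume) value-weighted
# average of the component medians, instead of one pooled average.
totals = sales.groupby("prop_type")["sale_price"].sum()
weighted_avg = (by_component["median"] * totals / totals.sum()).sum()
print(f"Value-weighted average of component medians: {weighted_avg:,.0f}")
```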

2. Diverging Components – Aggregate demand is not necessarily the best way to present a particular market, especially when the components do not move in tandem or diverge significantly. For example, the condo market generally leads the housing market – on the way up and on the way down. In presenting a residential market analysis where growth is at variance, it's better to present the SFR market as the Champ while the condo market serves as the Challenger, thus clearly portraying the divergence. A combined picture would mask the ongoing reality – a classic mistake many local reporters tend to make.

3. Power of Challenger – The Challenger analysis is, at its core, a validation exercise. When the Champ is meaningfully challenged (validated), the study becomes inherently more meaningful and statistically more significant, considering the two are mined from mutually exclusive and competing market segments. That is why property tax appeal consultants often hire well-known AVM consultants to develop a challenger AVM to unearth the over-valued parcels on the tax roll. The same concept applies to other major markets, e.g., challenging a sector mutual fund with a competing ETF, or a country analysis in emerging Europe with BRICS.

4. Single Parameter Champ – An unchallenged single-parameter Champ like the month-over-month median SFR sale price analysis is inadequate (necessary but not sufficient) for making informed business decisions. It needs to be challenged both "intra" and "inter." The intra challenger (from within the group) is generally the normalized Median Sale Price per SF. Builders often challenge the market approach with a market-adjusted cost approach. Conversely, the ideal "inter" challenger could be the analysis of the condo market, as it is a competing component (sub-market) of the overall housing market, thus leading to the highest and best analytical use of the overall market.

5. Reducing Market Noise – Normally, the SFR and condo markets remain in sync. When they diverge, one needs to investigate the reason. Since the condo market often takes the lead, either way, the divergence could be tell-tale, pointing to the beginning of a new market swing; for example, if the condo market starts to trend up, SFRs and townhomes won't be far behind. When they diverge for a long time, one must run the normalized tests to determine whether the market internals are truly diverging. If not, it could be a "monthly" aberration; a 2-month moving average helps reduce such monthly noise. These are the primary tools one should initially apply in diagnosing the reason for market divergence. If those tools are unhelpful, a step-by-step regression model could point to more precise reasons.
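
Here is a minimal pandas sketch of the monthly-median series with a 2-month moving average overlay; the dates, prices, and column names are made up for illustration:

```python
# A minimal sketch of the monthly-median series plus a 2-month moving
# average to damp single-month noise; data and column names are made up.
import pandas as pd

sales = pd.DataFrame({
    "sale_date":  pd.to_datetime(["2019-01-15", "2019-01-20", "2019-02-10",
                                  "2019-02-25", "2019-03-05", "2019-03-30"]),
    "sale_price": [300_000, 310_000, 305_000, 340_000, 315_000, 325_000],
})

monthly = (sales.set_index("sale_date")
                .resample("M")["sale_price"]
                .median()
                .rename("median_sp"))

# 2-month moving average: smooths a one-off monthly aberration without
# lagging the trend as much as a longer window would.
smoothed = monthly.rolling(window=2).mean().rename("ma_2m")
print(pd.concat([monthly, smoothed], axis=1))
```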

6. Challenger Condo Model – If one is forced to build a challenger (regression) model for the condo market, one must remember that condo modeling is different from SFR modeling. Condo modeling can be top-down or bottom-up. It's good to avoid top-down modeling, as it involves income modeling requiring hard-to-find complex-level income-expense data. Since condo sales occur at the unit level, bottom-up market modeling is more common. In addition to the unit-level condo sales data, market modeling requires data on unit-level property attributes, complex-level amenities, and general location, which are available on county assessment sites. Under severe time constraints, or if the condo data are not easily accessible, a condo sales ratio study could provide a stop-gap challenge.
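
For the stop-gap option, a sales ratio study boils down to a few lines. A minimal sketch, assuming hypothetical 'assessed_value' and 'sale_price' columns:

```python
# A minimal sketch of a condo sales ratio study; column names and
# numbers are illustrative assumptions.
import pandas as pd

condos = pd.DataFrame({
    "assessed_value": [280_000, 295_000, 310_000, 250_000],
    "sale_price":     [300_000, 310_000, 290_000, 275_000],
})

ratios = condos["assessed_value"] / condos["sale_price"]
median_ratio = ratios.median()

# Coefficient of Dispersion (COD): average absolute deviation from the
# median ratio, expressed as a percent of the median ratio.
cod = 100 * (ratios - median_ratio).abs().mean() / median_ratio

print(f"Median ratio: {median_ratio:.3f}, COD: {cod:.1f}")
```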

7. Apples-to-apples Comparison – The SFR market tends to be more homogeneous than the condo market. Though there are waterfront mansions, French Tudors, brownstones, etc. in the SFR market, they are not the norm. Conversely, condo markets routinely comprise low-rise, mid-rise, high-rise, and skyscraper buildings with significantly different amenities. So, one needs to ensure an apples-to-apples comparison; for example, in NYC, only the low-rise condos are grouped with the SFRs in the same tax class, easing the comparison. In suburban markets, it is prudent to remove the high-rise and skyscraper condos from the sample. Of course, if one uses the Median Sale Price or Median SP/SF, a handful of high-rise condo unit sales would not skew the results.

8. Data for External Analysts – While collecting the data, the external analyst should know that, nowadays, a vast majority of counties (where the population-level data originates) make at least the sales data available on their sites (as customer service, so property owners can develop their own comparables analysis and validate the market values on the tax roll). Additionally, it's prudent to choose a county that also makes property data elements like Bldg SF, Land SF, Year Built, etc. available, to support the normalized tests or the regression model. Of course, when one has ample time for the project and is undertaking it for the institution, one would be better off buying the data from a national data vendor with many more data variables. Most data vendors offer a small sample to evaluate the quality of the data and the variables they warehouse.

9. The External Challenger – Last but not least, it's good to compare the internal results with the S&P Case-Shiller indices. The Case-Shiller monthly housing indices are available for the 20 major markets (MSAs), both seasonally adjusted and unadjusted. Since the internal analysis is generally seasonally unadjusted, the comparison must be made with Case-Shiller's unadjusted indices. Since 3rd-party data comes with many copyright restrictions, the comparison should be shown in the report with full disclaimers, but not in the presentation. Moreover, considering this is 3rd-party work, it does not make much economic sense to promote it; instead, one must always learn to promote one's own/internal work as the solution. For instance, smart real estate brokers always advise their salespeople to sell in-house inventory, as it costs the brokerage a lot of money and time to acquire exclusive listings.

Again, a good champ-challenger analysis is self-selling and convincing as the challenger does most of the selling.


-Sid Som, MBA, MIM
homequant@gmail.com

Friday, January 17, 2020

Median-based Trend Analysis – Despite being Industry Standard – Must be Challenged

[Graphs: Monthly Median Sale Price (Champ) and Monthly Median Sale Price per SF (Challenger)]

Ryan is interviewing for a Supervising Analyst position with a major think-tank.

Question # 1
Interviewer: These two graphics are derived from the same housing market and reflect the same period. Why do they look so dissimilar? To keep the conversation professional, let's refer to the Monthly Median Sale Price as the Champion, or Champ, and the Monthly Median Sale Price per Square Foot as the Challenger.

Ryan: This dissimilarity proves that an industry-standard Champ needs to be validated or challenged. A prudent market analyst must not take a set of established assumptions for granted; instead, the analyst should subject such assumptions to frequent tests and validations.

Question # 2
Interviewer: Why do you think that is important? What's wrong with a time-tested Champ? Why do you need to introduce an untested Challenger?

Ryan: The Challenger helps identify any ongoing shifts in the market; for example, when prospective buyers gradually move towards smaller homes, the demand pattern shifts. The Champ will not capture and reflect that shift in the demand pattern, but the Challenger definitely will. That is why analysts should meaningfully challenge the Champ.

Question # 3
Interviewer: Why are the double top and double bottom formations bearish and bullish, respectively?

Ryan: A double top is bearish because the price fails to break out of the congestion, generally resolving with a downward trend. On the other hand, a double bottom is bullish, as it's a breakout event with an up-sloping linear trend, often making new highs.

Question # 4
Interviewer: Explain the difference between the two in more quantitative terms.

Ryan: After peaking at $320,000, the Champ remained sideways, congesting between $300,000 and $310,000. The Challenger trend was almost diametrically opposite with an extremely bullish up-sloping double bottom, eclipsing the prior high of $180/SF. Even the moving average has confirmed the breakout.

Question # 5
Interviewer: If you have to show one of the two graphs to our clients, which one would you choose and why?

Ryan: I would choose the Challenger graph, as it captures and depicts the market's underlying fundamentals. 

Question # 6
Interviewer: Is there a missing piece in this presentation explaining why these two solutions are diverging? If so, how would you present that data?

Ryan: Yes, the Monthly Median Home Size (SF) is missing. SF would explain why they are diverging. I would use a simple table showing all three monthly data variables without showing these two-dimensional graphs.  

Question # 7
Interviewer: Why do you think the bullish R-squared is so much higher than its bearish counterpart?

Ryan: Because the linear trendline fits the up-sloping bullish curve well. The bearish curve does not follow a linear trend, so the resulting R-squared is low.

Question # 8
Interviewer: In that case, what type of trendline would you fit, and how much difference would that make?

Ryan: I would fit a polynomial trendline of the 6th order, expecting reasonably similar results. 

Interviewer: Give me a minute and let me check it out. Yes, you are right; it's 0.794. That's excellent data visualization. Congrats!
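
(For readers who want to replicate this check outside Excel, here is a minimal numpy sketch of fitting a 6th-order polynomial trendline and computing its R-squared. The monthly series below is made up, so the resulting number will differ from the 0.794 above.)

```python
# A minimal sketch: fit a 6th-order polynomial trendline and compute its
# R-squared; the 12-month series is made up for illustration.
import numpy as np

x = np.arange(12)                           # months 1..12
y = np.array([310, 305, 300, 308, 303, 300, 306, 302, 305, 309, 304, 307],
             dtype=float)                   # median price, $000s

coeffs = np.polyfit(x, y, deg=6)            # 6th-order polynomial fit
fitted = np.polyval(coeffs, x)

# R-squared: share of variance explained by the fitted trendline.
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R-squared of 6th-order trendline: {r2:.3f}")
```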

Question # 9
Interviewer: Would you use the median-based analysis in business decision-making? If not, how would you improve upon it?

Ryan: The median-based analysis is necessary for a quick-and-dirty read but isn't sufficient for making business decisions. I would use an extended percentile curve, like the 5th to the 95th, excluding the outliers.
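
(A minimal numpy sketch of such an extended percentile curve, on synthetic prices:)

```python
# A minimal sketch of an extended percentile curve (5th to 95th), which
# shows the whole price distribution instead of a single median point.
import numpy as np

rng = np.random.default_rng(0)
prices = rng.lognormal(mean=12.6, sigma=0.4, size=5_000)  # synthetic sale prices

pcts = np.arange(5, 100, 5)                 # 5th, 10th, ..., 95th
curve = np.percentile(prices, pcts)

for p, v in zip(pcts, curve):
    print(f"{p:>2}th percentile: ${v:,.0f}")
```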

-Sid Som, MBA, MIM
homequant@gmail.com

Wednesday, January 15, 2020

How to Analyze and Present a Complex Dataset – in 60 Minutes

** For New Graduates/Analysts **

We talked about analyzing and presenting a large and complex dataset in 30 minutes in the prior blog post. Would one handle it differently if one had 60 minutes? Here is one approach one might like to consider:

1. While starting out, many young folks tend to underestimate themselves. The very fact that one has been tasked with this critical presentation speaks volumes, so one must learn to take full advantage of the visibility in narrowing down the (internal) competition. These meetings are often frequented by other department heads and high-level client representatives, which can lead to a significant loss of time in unrelated (business) discussions. The best way to prepare for such contingencies is to split the presentation into a two-phase solution, where phase-1 leads seamlessly into phase-2.

2. In a business environment, it's never a good idea to start with a complicated stat/econ model; instead, one must start a bit slow but use one's analytical acumen and presentation skill to gradually bring everyone onto the same page (retaining maximum control over the presentation in terms of both time and theme). Therefore, the phase-1 solution should be the same as the full 30-minute solution we detailed in a prior blog post (including the sub-market analysis). Even if the meeting drifts into unrelated business chit-chat, off and on, the presenter will still be able to squeeze in the phase-1 solution, thus offering at least a baseline solution. Conversely, if one has only an all-encompassing solution, one could end up offering virtually nothing.

3. Now that the phase-1 presentation, establishing a meaningful baseline, is over, one should be ready to transition to the higher-level phase-2 solution. In other words, it's time to show off one's modeling knowledge. The phase-1 presentation comprised a baseline Champ-Challenger analysis, where the Champ was the Monthly Median Sale Price and the Challenger was the Monthly Median SP/SF. The presenter used the "Median" to avoid having to clean up the dataset for significant outliers. Here is the caveat of sales analysis, though: sales, individually, are mostly judgment calls; for example, someone bent on buying a pink house would overpay, while an investor would underpay by luring a seller with a cash offer. In the middle (the middle 68% of the bell curve), the so-called informed buyers would use five comps, usually hand-picked by the salespeople, to value the subjects – not an exact science either.

4. Now, let's envision where the presenter would be at this stage – 30 minutes in hand and brimming with confidence. That is still not enough time to develop and present an accurate multi-stage, multi-cycle AVM. So, it's good to settle for a straightforward regression-based modeling solution, allowing time to add a few new slides to the original presentation. Ideally, the model should be built as one log equation with a limited number of variables (though covering all three major categories). The variables one might like to choose are: Living Area, Age, Bldg Style, Grade, Condition, and School/Assessing District, avoiding the 2nd-tier variables (e.g., Garage SF, View, Site Elevation, etc.).

5. One should use the Time-Adjusted Sale Price (ASP) as the dependent variable in the regression model, explaining the connection between the presentations (meaning phase-1 and phase-2) so the audience (including the big bosses like the SVP, EVP, etc.) understands that the two phases are not mutually exclusive; rather, one is the stepping stone to the other. At this point, the presenter could face the question, "Why did you split it up into two?" The answer must be short and truthful: "It's a time-based contingency plan."
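
A minimal sketch of such a one-equation log model, shown here in Python with statsmodels rather than any particular in-house tool; the input file and column names are illustrative assumptions:

```python
# A minimal sketch of a one-equation log-linear sale price model,
# assuming a hypothetical DataFrame of time-adjusted sales (ASP) with
# the variables named in the text; file and column names are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: asp (time-adjusted sale price), living_area, age,
# bldg_style, grade, condition, school_district
df = pd.read_csv("sales_sample.csv")        # hypothetical input file

model = smf.ols(
    formula=("np.log(asp) ~ np.log(living_area) + age "
             "+ C(bldg_style) + C(grade) + C(condition) + C(school_district)"),
    data=df,
).fit()

# The three talking points below: variable selection, variable strength
# (t-stats/p-values), and overall accuracy (R-squared, F-statistic).
print(model.summary())
```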

6. At this point, the presenter must keep the regression output handy without inserting it into the main presentation, considering it is a log model (the audience may not relate to the log parameter estimates). If the issue comes up, the presenter should talk about the three critical aspects of the model: (a) the variable selection (how all three categories were represented), (b) the most vital variables as judged by the model (walking down the t-stats and p-values), and (c) the overall accuracy of the model (zeroing in on the primary stats like r-squared, f-statistic, confidence, etc.).

7. The presenter must explain the model results in three simple steps: (a) Value Step: ASPs vs. Regression values, showing the entire percentile curve, 1st to 99th, rather than the median values only, and also pointing out the inherent smoothness of the Regression values vis-a-vis the ASPs; (b) Regression Step: how some arm's-length sales could be somewhat irrational at both ends of the curve (<=5th and >=95th percentile) and why the standard deviation of the Regression values is so much lower than the ASPs'; and (c) Ratio Step: stats on the Regression Ratio (Regression Value to ASP), as it's easier to explain the Regression Ratios than the natural numbers, so spending more time on the ratios would make the presentation more effective.

8. The presenter should explain the outlier ranges – the ratios below the 5th and above the 95th percentile, or below 70 and above 143. With the outliers removed, it's good to display the Std Dev, COV, COD, etc.; the outlier-free stats would be significantly better than the prior (with-outliers) ones. Another common outlier question is: "Why no waterfront in your model?" The answer is simple: waterfront parcels generally comprise less than 5% of the population, hence it is challenging to test their representativeness. (In an actual AVM, if the sold waterfront parcels properly represent the waterfront population, the variable could be tried in the model, as long as it clears the multi-collinearity test as well.)
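
A minimal sketch of the outlier trimming and the outlier-free ratio stats (the ratios below are made up):

```python
# A minimal sketch of the ratio step: trim outlier ratios at the 5th and
# 95th percentiles, then report Std Dev, COV, and COD on the clean set.
import numpy as np

ratio = np.array([68, 85, 92, 98, 100, 101, 103, 107, 112, 150], dtype=float)

lo, hi = np.percentile(ratio, [5, 95])
clean = ratio[(ratio >= lo) & (ratio <= hi)]

median = np.median(clean)
std = clean.std(ddof=1)
cov = 100 * std / clean.mean()                       # Coefficient of Variation
cod = 100 * np.abs(clean - median).mean() / median   # Coefficient of Dispersion

print(f"Median: {median:.1f}  Std Dev: {std:.1f}  COV: {cov:.1f}  COD: {cod:.1f}")
```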

9. Last but not least, one must be prepared to face an obvious question: "What is the point of developing this model?" Here is the answer: "A sale price is more than a handful of top-line comps. It comprises an array of important variables like size, age, land, building characteristics, fixed and micro-locations, etc., so only a multivariate model can do justice to sale prices by properly capturing and representing all of these variables. The output from this Regression model is a statistically significant market replica of the sales population. Moreover, the model can be applied to the unsold population to generate equally significant market values. Simply put, this Regression model is an econometric market solution. Granted, the unsold population could be comp'd, but that's a very time-consuming and subjective process."

-Sid Som
homequant@gmail.com

How to Analyze and Present a Complex Dataset – in 30 Minutes

** For New Graduates/Analysts **

When one has minimal time on hand – say 30 minutes – to summarize and present a relatively large and complex home sales dataset comprising 18 months of data, 30K rows, and ten variables, here is one approach worth considering:

1. Given the limited time, instead of trying to crunch the data in a spreadsheet, it's better to use one's favorite statistical software like SAS, SPSS, etc. What SAS will do in four short statements (Proc Means, Var, Class, and Output), in a matter of minutes, would take much longer to accomplish in spreadsheets. When one is starting out, it's good to take full advantage of these types of highly visible, often gratifying challenges to narrow down the potential competition.
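
For readers working in Python rather than SAS, a rough pandas equivalent of that four-statement PROC MEANS step might look like this (the file and column names are illustrative):

```python
# A rough pandas equivalent of PROC MEANS with CLASS, VAR, and OUTPUT:
# group by the class variable, summarize the analysis variable, and
# emit a summary dataset. File and column names are made up.
import pandas as pd

sales = pd.read_csv("home_sales.csv")            # hypothetical 30K-row extract

summary = (sales
           .groupby("sale_month")["sale_price"]  # CLASS sale_month; VAR sale_price
           .agg(n="count", median="median", mean="mean")
           .reset_index())                       # OUTPUT OUT=summary
print(summary)
```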

2. It's good to have a realistic game plan. Instead of shooting for an array of parameters, it's better to start with the most significant one, i.e., Monthly Median Sale Price (and the normalized Sale Price per SF). Since the median is not prone to outliers, the dataset doesn't have to be edited for outliers, saving a significant amount of time.  

3. Now that the monthly median prices are there, one should be ready to create graphs for the presentation. While one graph depicting both prices (Y1 and Y2) against months (X-axis) may be created, it's prudent to keep them separated for ease of presentation. 

4. Since basic graphing is more straightforward in Excel (in fairness to the remaining time), it's better to transfer the output from SAS to Excel, ensuring that the graphs are adequately annotated and dressed up with axis titles, legends, gridlines, etc. One must also remember that just doing things right is not good enough; one must learn to present things elegantly as well.

5. Since so much of the data have been summarized and rolled up behind one or two graphs, one must make sure they not only tell the overall story but also convey enough business intelligence to make the presentation look like a well-thought-out business solution in front of the attending EVP, SVP, etc. In the presence of clients, it enhances the bosses' image as well. So, it's smart to add trendlines alongside the data trend, selecting the primary trendline by eyeballing the data trend (linear, logarithmic, polynomial, etc.). Adding a 2 to 3-month moving average (depending on the time series) trendline to iron out any monthly aberrations could enhance the presentation.

6. It's also smart to keep the reporting verbiage clear and concise, explaining the makeup of the dataset, the methodology (including monthly medians), and how the normalized prices add value and help validate the primary measure. It's also important to explain the use of the trendline, its statistical significance, and the other statistical measures like r-squared, slopes, etc. that one might display on the graphs (avoiding printing the equations on the graphs).

7. It's good to add some business intelligence to the talking points, sticking to the market being presented but proving the depth of one's knowledge of that market by highlighting possible headwinds and tailwinds and how they would react to an inverted yield curve. One should also address other issues: whether there is an ongoing structural shift in demand for homes (are more millennials showing interest in that market?); what NAR predicts for the summer inventory there; whether the inventory of affordable homes is on the rise there; and how any expected changes to the FHA rules would help first-time homebuyers in general, etc.

8. One must try to control the conversation by sticking to what one is presenting, rather than what one does not have. For example, out of the ten variables, if only three are used (e.g., Sale Price, Sale Date, and Bldg SF), one should not start a conversation about the other important variables – Lot size, Age, Bldg Characteristics, and Location – that had to be left out ('If I had 30 more minutes' would be unnecessary). If that question comes up, one must answer it intelligently and truthfully, emphasizing, of course, the added utility of the three variables being used.

9. Now, let's assume that one has managed to complete the first cycle (as indicated above) in 20 minutes. In that case, one must go back to SAS and crunch the sales analysis by sub-markets (Remember: Location! Location! Location!). In other words, one must understand how to walk down the analysis curve.

Of course, it's good to have these printouts handy. Just remember: one complete solution is always better than a more ambitious one that is only 95% complete.

-Sid Som, MBA, MIM
homequant@gmail.com

Monday, January 13, 2020

Use Tiered Prices to Understand Housing Market – The Boston Case Study

** Intended for New Graduates/Analysts **

[Table: Case-Shiller tiered price index growth, Boston, January 2017 – July 2019]


As indicated in prior chapters, not all price segments of the same housing market necessarily move in tandem. In a well-distributed and liquid market, the price escalation generally starts at the low price tier and graduates up as the underlying market fundamentals strengthen. Therefore, in the world of research and analytics, the Case-Shiller tiered price indices are highly sought after.

The above table demonstrates that while the Low tier (under $395,499) registered an excellent overall growth of 17.90% between January 2017 and July 2019, staying significantly above the aggregate growth rate of 12.55%, the two upper tiers returned much lower growth rates of 13.35% and 10.59%, respectively. Similarly, while the Middle tier did not perform as well as the Low tier, it did return a better growth rate than the High tier, remaining above the aggregate growth rate as well. Thus, the segment-wise growth rates prove that a one-size-fits-all growth rate does significant injustice to both ends of the price curve.
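
For the arithmetic behind these growth rates: each is simply the percent change in the tier's index level between the two dates. A minimal sketch, with made-up index levels chosen to roughly reproduce the rates quoted above:

```python
# A minimal sketch of deriving overall growth rates from index levels;
# the (Jan 2017, Jul 2019) index values below are made up, chosen only
# to roughly reproduce the rates quoted in the text.
tiers = {
    "Low":       (190.0, 224.0),
    "Middle":    (195.0, 221.0),
    "High":      (198.0, 219.0),
    "Aggregate": (196.0, 220.6),
}

for tier, (start, end) in tiers.items():
    growth = 100 * (end / start - 1)
    print(f"{tier:<9} growth: {growth:6.2f}%")
```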

So what are the primary uses of the Case-Shiller price tier indices? Here are some:

1. Time-adjust Prior Year Tax Roll Values – When analysts and appeals consultants do not have the time or resources to develop new market values to challenge (validate) the current Tax Roll (market) values, the Case-Shiller tiered growth factors could be an ideal independent alternative. Given their independence, they would be a much easier sell than some internally developed heuristic rates. Of course, the counter-case could be compelling too: since the Case-Shiller markets are defined at the MSA level, a County Assessor could make a case that such time factors are too broad-based to be meaningful at the County (small subset) level.
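
A minimal sketch of such a time adjustment (the roll value and index levels are made up):

```python
# A minimal sketch of time-adjusting a prior-year roll value with a
# Case-Shiller tiered factor, i.e., the index ratio between the two
# valuation dates; all numbers below are made up.
prior_value   = 450_000   # prior tax roll (market) value
index_prior   = 210.0     # CS tier index at the prior valuation date
index_current = 222.6     # CS tier index at the current valuation date

time_factor   = index_current / index_prior
updated_value = prior_value * time_factor
print(f"Factor: {time_factor:.4f} -> Updated value: ${updated_value:,.0f}")
```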

2. Challenge Internal AVM Time Adjustments – AVM modelers can use the tiered time factors to challenge the internal AVM time factors. The Case-Shiller factors should be in line with those of large subsets; for instance, LA County time factors should be very similar to Case-Shiller's. Therefore, internal QC supervisors, both private and public, should additionally use these independent factors to test the mettle of the internal models. Needless to say, those who develop models at the MSA level would be the big beneficiaries of the Case-Shiller tiered time factors.

3. To Periodically Update Mortgage Portfolios – Mortgage portfolio analysts can use these factors to periodically update the portfolio values, without having to develop challenger AVMs. These factors are more meaningful when the mortgage portfolios are rolled up at the MSA or regional level. Conversely, one must be careful not to overuse these factors at the small-subset level, unless prior studies show that those subset factors tend to align well with the MSA's.

4. Update a Not-so-recent Comparable Sales Pool – When an analyst or a loan officer must work out of a not-so-recent comparable sales pool, these Case-Shiller factors could be used to time-adjust at least the older sales. Of course, that would be a quick fix, not a real valuation solution per se. This method is especially helpful in some bigger environments (e.g., AMCs) where time-adjusted comps are often used in batch mode, in place of 3rd-party AVM values.

5. Enhance Shelf-life of AVM Values (Sell Side) – AVM houses sell their values to a wide range of end-users like banks, mortgage companies, assessment jurisdictions, SFR rental operators, REITs, large tax appeal lawyers and consultants, hedge funds, etc. Many such AVM houses outsource the modeling and value generation to 3rd-party research outfits, professors, etc. The Case-Shiller tier indices help them apply time adjustments and thus enhance the shelf-life of those AVM values, easily up to a year.

6. Enhance Life Expectancy of AVM Values (Buy Side) – Even the 3rd-party non-custom AVM values can be quite expensive, e.g., $5 to $10 per parcel. Given that, many end-users can use the Case-Shiller tier indices to extend the usefulness of the AVM values for several quarters, thus saving a ton of money. In fact, those internally time-adjusted AVM values are oftentimes very similar to the new values that the originating AVM houses sell. Anecdotally, some tax appeal consultants use smart college students to adjust their old AVM values up to the new target date, meaning the new Tax Roll (valuation/status) date.
  
7. Develop Analysis for Investors – When researchers are required to develop inter-market (across markets) comparisons for investors, the Case-Shiller tiered price indices are more useful than the un-tiered composites, as these indices allow a true apples-to-apples analysis. Therefore, to compare two competing markets, one should compare by the tiers rather than the overall markets, allowing investors to understand the current valuation of each market segment; for instance, when the Low tier makes a significant upward run, investors might shy away from that market segment (and perhaps vice versa). So, a one-size-fits-all market analysis does not work well for investors.

8. Flight to Quality in Financial Markets – The three-tiered indices help smart investors and traders swap investments back and forth between the housing market and the equity market, as the latter also comprises primarily three price segments: small cap, mid cap, and large cap. While rotating investments, smart investors and traders would naturally prefer studying the competing markets by price tiers, to avoid rotating from one over-valued market segment to another similarly over-valued segment (thus defeating the basic purpose).

[Table: Case-Shiller tiered price volatility, Boston]

9. Understand True Volatility – When volatility is important to the organization, developing the tiered price volatility is critical, as it does not mask the two ends of the price curve. The above volatility table shows that the Low tier has been a lot more volatile than the upper price tiers. The reason is quite simple: the high-growth segment (translating to an expanded price range) comes with higher volatility, while low growth (resulting in a more compact price range) paves the way for lower volatility. Case in point: if the aggregate rate of 3.71% is used across a portfolio, the volatility of the low price tier would be understated while that of the high price tier would be overstated, thus distorting both price segments. Of course, the middle tier would be in line with the aggregate rate.
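
A minimal sketch of how tiered volatility can be computed from month-over-month index returns (the index levels below are made up):

```python
# A minimal sketch of tiered volatility: the annualized standard
# deviation of month-over-month index returns per tier; the short
# index series below are made up for illustration.
import numpy as np

tiers = {
    "Low":  [200.0, 203.1, 201.5, 206.0, 209.8, 208.2],
    "High": [210.0, 210.8, 211.3, 212.0, 212.4, 213.1],
}

for tier, levels in tiers.items():
    levels = np.asarray(levels)
    monthly_ret = np.diff(levels) / levels[:-1]          # m-o-m returns
    vol = 100 * monthly_ret.std(ddof=1) * np.sqrt(12)    # annualized, %
    print(f"{tier:<4} tier volatility: {vol:.2f}%")
```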

When the new values are not immediately available, the Case-Shiller tiered price indices come in very handy while updating the older portfolios, rotating investments across financial markets, and understanding the true underlying volatility of the housing market.   

P.S. These are Case-Shiller’s seasonally adjusted indices, so the month-over-month comparison is fine. While using Case-Shiller’s seasonally unadjusted indices, one should compare July 2019 with July 2018 and July 2017, etc.

-Sid Som, MBA, MIM
President, Homequant, Inc.
homequant@gmail.com

Sunday, January 12, 2020

Condo Market Trend – Boston, Chicago, LA, NYC & San Francisco

[Graph and data table: Case-Shiller condo indices – Boston, Chicago, LA, NYC & San Francisco]


Lisa, a new college graduate with co-concentrations in Economics and Math, is interviewing for a Research Analyst position.

Question # 1
Interviewer: Take a look at the above graph and tell me if you see any inconsistency in the construction.

Lisa: The Chicago data range is totally out of sync with the rest, so it should be graphed on Y2. In other words, instead of just X and Y, I would graph it as X, Y1, and Y2, where Y2 would represent the Chicago value range.

Question # 2
Interviewer: In that case, how would you redefine the Y ranges? Will that rearrangement help the other markets?

Lisa: The Y1 would be compressed down to a range between 200 and 325 with an increment of 25, while the new Y2 range would be between 130 and 160, with an increment of 10. And yes, the rearrangement would help project the other markets better, with a more meaningful Y1 range.
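
(As an aside, here is a minimal matplotlib sketch of the X, Y1, Y2 arrangement Lisa describes; the index values are made up.)

```python
# A minimal sketch of a dual-axis (X, Y1, Y2) chart: the out-of-range
# Chicago series moves to a secondary axis; all values are made up.
import matplotlib.pyplot as plt

months  = ["Jan", "Feb", "Mar", "Apr"]
boston  = [220, 223, 226, 229]     # Y1-scale markets
nyc     = [280, 279, 281, 280]
chicago = [138, 140, 142, 143]     # out-of-sync range -> Y2

fig, ax1 = plt.subplots()
ax1.plot(months, boston, label="Boston")
ax1.plot(months, nyc, label="NYC")
ax1.set_ylim(200, 325)             # compressed Y1 range
ax1.set_ylabel("Condo index (Y1)")

ax2 = ax1.twinx()                  # secondary axis for Chicago
ax2.plot(months, chicago, linestyle="--", label="Chicago")
ax2.set_ylim(130, 160)             # new Y2 range
ax2.set_ylabel("Chicago index (Y2)")

fig.legend(loc="upper left")
plt.show()
```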

Question # 3
Interviewer: Is there any other room for improvement between the graph and the data table? 

Lisa: Yes. The data table is redundant. The data with the legends can be placed right under the months in the graph, making the table irrelevant. 

Question # 4
Interviewer: What about the growth rates? How would you show them?

Lisa: Anyone can eyeball the overall growth rates. The monthly averages are not indicative of anything meaningful here. I would therefore combine both into one more meaningful graph.

Question # 5
Interviewer: Can you make a comparative analysis of two market groups from the data table?

Lisa: The West Coast markets are moving in tandem, while the East Coast markets have forked. Specifically, LA and San Francisco have produced very similar returns, but New York and Boston are surprisingly divergent. Boston has the best return but NYC has been flat.

Question # 6
Interviewer: You just graduated from Columbia so you will know Manhattan RE quite well. Why do you think the Manhattan market has been flat-lining?  

Lisa: Two reasons: (a) the cap on SALT has been impacting the high-end co-op and condo markets in Manhattan, and (b) as you know, foreign buying of US real estate has tumbled in the last two years, which has dealt a serious blow to the Manhattan market, especially the high-end condo market.

Question # 7
Interviewer: If foreign buying is impacting Manhattan, it should impact LA as well. But LA has been strong. Can you explain?

Lisa: Unlike Manhattan, foreign buyers in that market invest more heavily in private homes rather than condos per se. Manhattan is essentially a co-op and condo market.

Question # 8
Interviewer: How would you characterize the Boston condo market? Why isn't it showing the same pattern as NYC?  

Lisa: Boston is a more natural market. SALT and foreign buyers do not impact Boston that much. It's guided by its own economic fundamentals. That is why it has responded well to the falling interest rates in recent months.

Question # 9
Interviewer: Based on the above data, would you recommend any of these markets to our clients and, if so, why?

Lisa: Yes, I would definitely recommend Boston. As I said, Boston is a more natural market. Additionally, for analysts and modelers, natural markets are always better, as predictive models perform better in those markets.

P.S. These are Case-Shiller’s seasonally adjusted indices, so the month-over-month comparison is fine. While using Case-Shiller’s seasonally unadjusted indices, one should compare July 2019 with July 2018 and July 2017, etc.


-Sid Som, MBA, MIM
President, Homequant, Inc.
homequant@gmail.com