RentHop: predicting interest level for rental listings


RentHop is a New York-based apartment listing website that uses an innovative algorithm to sort and present listings to its visitors. As part of our third project at NYCDSA, we participated in the Two Sigma Connect: Rental Listing Inquiries machine learning competition on Kaggle. Our challenge was to predict the interest level in an apartment rental listing, with target labels of high, medium, and low. We had 49,000 labeled entries and made predictions on 74,000.

The scope of this problem represents a typical machine learning challenge faced by many organizations. The data consisted of numerical, continuous, ordinal, and categorical variables, along with photos, unit descriptions, and geospatial information, and the class distribution was imbalanced. Considering all of the above, this problem is a much better proxy for the real-world situations in which data science can and must be employed, a somewhat uncommon trait among Kaggle competitions, which tend to supply cleaner data and focus mainly on predictive modeling.

Feature Engineering

Our initial approach to this problem was to slice and dice the given variables in a variety of ways so that we could capture the information contained within them. The features that required little treatment were the following:

  • Price (continuous): Log transformation
  • Bedrooms (integer): Unchanged
  • Bathrooms (integer): Unchanged
  • Price/Bedrooms and Price/Bathrooms: Computed by adding 1 to the denominator and taking the logarithm
  • Latitude and Longitude (continuous): An initial analysis showed that although not all coordinates pointed to the New York City area, the ones that didn't were usually incorrect and still referred to NYC listings. We therefore forced all coordinates into a rectangular area around the city.
  • Photos (list of URLs): As a first approach, we simply used the number of photos per listing.
  • Description and Features (text, list): As a first approach, we extracted simple metrics such as length and word count.
  • Listing creation date/time: Extracted features such as the hour of the day, the day of the week, and the day of the month.
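The initial transformations above can be sketched in pandas. This is a minimal sketch, not the project's actual code; the column names are assumed to match the Kaggle JSON fields, and the NYC bounding box values are illustrative:

```python
import numpy as np
import pandas as pd

def engineer_basic_features(df):
    """Sketch of the initial feature transformations described above."""
    out = pd.DataFrame(index=df.index)
    out["log_price"] = np.log(df["price"])
    out["bedrooms"] = df["bedrooms"]
    out["bathrooms"] = df["bathrooms"]
    # +1 in the denominator avoids division by zero for studios
    out["log_price_per_bed"] = np.log(df["price"] / (df["bedrooms"] + 1))
    out["log_price_per_bath"] = np.log(df["price"] / (df["bathrooms"] + 1))
    # Snap stray coordinates into a rectangle around NYC (bounds illustrative)
    out["latitude"] = df["latitude"].clip(40.4, 41.0)
    out["longitude"] = df["longitude"].clip(-74.3, -73.6)
    out["num_photos"] = df["photos"].apply(len)
    out["desc_chars"] = df["description"].str.len()
    out["desc_words"] = df["description"].str.split().apply(len)
    created = pd.to_datetime(df["created"])
    out["hour"] = created.dt.hour
    out["day_of_week"] = created.dt.dayofweek
    out["day_of_month"] = created.dt.day
    return out
```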

These initial features were already very informative, in that certain models could achieve cross-entropy losses below 0.6 with them alone (the top score is still above 0.500 as we write this article). Of these features, the most significant was found to be the price.

Density plot of prices for different interest levels. Price is on a logarithmic scale.

The listing creation hour can also tell us something about the interest level.

Interest levels by listing creation hour

Other Categorical Variables

An approach sometimes employed with categorical variables consists of encoding them as integer values and then transforming them into dummy variables. The predictors in question are Manager ID, Building ID, Street Address, and Display Address.
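A minimal sketch of that encoding step with pandas (column names assumed from the dataset; for lower-cardinality variables the encoded column could then be expanded with pd.get_dummies):

```python
import pandas as pd

def encode_categoricals(df, cols=("manager_id", "building_id",
                                  "street_address", "display_address")):
    """Label-encode high-cardinality categorical columns in place of
    their raw string values."""
    out = df.copy()
    for col in cols:
        # factorize maps each distinct value to a stable integer code
        out[col + "_enc"] = pd.factorize(out[col])[0]
    return out
```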

Listing Features

When it comes to using apartment features as a predictor, we had to start by taking a good look at the data. In our raw data, the listing features were either presented in a structured way, such as [Elevator, Laundry in Building, Hardwood Floors], or in a very unclean way, such as [** LIFE OF LUXURY FOR NO FEE! * SPRAWLING 2BR/2BA MANSION * WALLS OF WINDOWS * …]

Given this, we determined that the best approach was to capture features using regular expressions, being careful to check that all matches were relevant to the intended feature. In total, we extracted 56 different listing features. The code that achieves this can be found here.
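The idea can be sketched as follows. The pattern names and regexes here are illustrative assumptions, not the project's actual 56 patterns:

```python
import re

# Illustrative subset; the project extracted 56 features in total.
FEATURE_PATTERNS = {
    "hardwood_floors": re.compile(r"hardwood", re.I),
    "no_fee": re.compile(r"no\s*fee", re.I),
    "elevator": re.compile(r"elevator", re.I),
    "laundry_in_building": re.compile(r"laundry\s+in\s+building", re.I),
}

def extract_listing_features(raw_features):
    """Map a raw feature list (clean or messy) to 0/1 indicators."""
    text = " ".join(raw_features)
    return {name: int(bool(pat.search(text)))
            for name, pat in FEATURE_PATTERNS.items()}
```

The same function handles both the structured and the shouty, unclean variants, since the regexes are case-insensitive and tolerant of surrounding noise.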

Of the features extracted, the ones with the greatest impact on interest levels, taking into account the number of listings they affect, are Hardwood Floors and No Fee.

Distribution of interest levels with respect to hardwood floors listings

Distribution of interest levels with respect to no fee listings


Photos

After obtaining all the photos, taking over 80GB of storage space in total, we decided to extract a few useful values from them.

First, we looked at their dimensions and included them as predictors in our models (namely, average dimensions per listing, as well as the dimensions of the first photo).

We also extracted the sharpness of each image, which led to a somewhat surprising conclusion: contrary to our initial assumptions, high interest listings have slightly less sharp photos than low interest listings.

Average photo sharpness per listing for each interest level
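One common way to measure sharpness is the variance of a discrete Laplacian; the project's exact metric may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def sharpness(gray):
    """Variance of a 5-point discrete Laplacian over a 2-D grayscale
    array; blurrier images give smaller values."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()
```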

We also used the Clarifai API to extract the top 15 labels from each image. Clarifai uses convolutional neural networks to learn features present in the image and gives a probability estimate for each extracted label. Although these image labels showed some significance under gradient boosting's relative influence measure, during training and testing they showed little sign of improving the model.

Finally, in order to actually make use of the photo contents, we resized all photos to 100x100 squares to be fed into a convolutional neural network model (more details on that below).
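The resizing step might look like the following sketch using Pillow (the function name is an assumption; note the aspect ratio is deliberately discarded to get uniform squares):

```python
from PIL import Image

def make_thumbnail(source, size=(100, 100)):
    """Resize a photo to a fixed square thumbnail, as described above.
    `source` can be a file path or a file-like object."""
    with Image.open(source) as im:
        return im.convert("RGB").resize(size, Image.BILINEAR)
```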

Sentiment Analysis from Description (NLP - Natural Language Processing)

Syuzhet is an R package for the extraction of sentiment and sentiment-based plot arcs from text.

The name "Syuzhet" comes from the Russian Formalists Victor Shklovsky and Vladimir Propp, who divided narrative into two components, the "fabula" and the "syuzhet." Syuzhet refers to the "device" or technique of a narrative, whereas fabula is the chronological order of events. Syuzhet, therefore, is concerned with the manner in which the elements of the story (fabula) are organized.

The Syuzhet package attempts to reveal the latent structure of narrative by means of sentiment analysis. Instead of detecting shifts in the topic or subject matter of the narrative (as Ben Schmidt has done), the Syuzhet package reveals the emotional shifts that serve as proxies for the narrative movement between conflict and conflict resolution.

The package supports several sentiment lexicons:

  • AFINN: developed by Finn Årup Nielsen as the AFINN word database
  • Bing: developed by Minqing Hu and Bing Liu as the Opinion Lexicon
  • NRC: developed by Saif M. Mohammad and Peter D. Turney as the NRC Emotion Lexicon

Structure the Unstructured

Syuzhet is concerned with the linear progression of narrative from beginning (first page) to the end (last page), whereas fabula is concerned with the specific events of a story, events which may or may not be related in chronological order … When we study the syuzhet, we are not so much concerned with the order of the fictional events but specifically interested in the manner in which the author presents those events to readers.

What is Sentiment Analysis?

Opinion mining, or sentiment analysis, is the computational study of opinions, sentiments, subjectivity, evaluations, attitudes, appraisals, affects, views, and emotions expressed in text, whether in reviews, blogs, discussions, news, comments, feedback, or any other documents. Opinions are key influencers of our behavior: our beliefs and perceptions of reality are conditioned on how others see the world, and when we need to make a decision we often seek out the opinions of others. In the past, individuals sought opinions from friends and family, while organizations used surveys, focus groups, opinion polls, and consultants.

Using NLP for Feature Extraction

We used the syuzhet package to take the descriptions from the Kaggle dataset and produce a scoring mechanism that rated each description on a range of emotions:

  • Anger
  • Anticipation
  • Disgust
  • Fear
  • Joy
  • Sadness
  • Surprise
  • Trust

In addition, we composed features based on valence, classifying each description as either negative or positive.
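The project did this scoring in R with syuzhet; a toy Python sketch of the same NRC-style idea, with an entirely illustrative mini-lexicon, looks like this:

```python
# Toy NRC-style emotion scorer. The real work used the R syuzhet
# package; this lexicon is purely illustrative.
TOY_LEXICON = {
    "luxury": {"joy", "anticipation", "positive"},
    "spacious": {"joy", "positive"},
    "noisy": {"anger", "negative"},
    "broken": {"sadness", "fear", "negative"},
}

EMOTIONS = ("anger", "anticipation", "disgust", "fear", "joy",
            "sadness", "surprise", "trust", "negative", "positive")

def score_description(text):
    """Count lexicon hits per emotion over the whitespace-split text."""
    counts = dict.fromkeys(EMOTIONS, 0)
    for word in text.lower().split():
        for emotion in TOY_LEXICON.get(word.strip(".,!?*"), ()):
            counts[emotion] += 1
    return counts
```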

Models Implemented

Several machine learning models were implemented, including XGBoost and other decision-tree-based models, as well as neural networks.

Neural Networks

Because more than 99% of the data made available to us, in terms of size, was in the form of photos, we had to make use of it in our predictions. The challenge is to extract more features from the photos than the readily available ones: the number of photos per listing and their dimensions.

We identified four possible approaches (not mutually exclusive):

  • Feature Engineering: extract some manually chosen statistics, such as brightness, sharpness, contrast, etc.
  • Train a separate convolutional neural network model that classifies the images based on what they show. Possible categories could include kitchen, bathroom, floor plan, fitness center, street view, etc.
  • Train a separate model with only the photos, then feed the results to our main model.
  • Extract anonymous features by training on a unified model.

Initially, as a preparation step for the last approach, we trained a simple neural network with four dense layers, taking as input the basic listing features already provided to us with just a few tweaks. As we gradually developed our feature engineering, we saw our predictions improve.

Simple neural network model

Ultimately, the model yielding the best results, achieving a Kaggle score of 0.58854, used the following feature transformations:

  • Coordinates (longitude/latitude) outside the New York City area snapped to a rectangle around it,
  • Logarithm of price, price per bedroom and price per bathroom,
  • Counts of words and characters in the description, as well as the number of apartment features and their word count,
  • Time of day, day of month, day of week,
  • Sentiment analysis,
  • Parsed apartment features as 56 dummy variables,
  • Dimensions and sharpness (log) of first photo, as well as average dimension and sharpness (log) of all the photos per listing,
  • Manager id: encoded the top 999 managers in terms of the amount of listings, mapping all the remaining ones to a common category, and then applied an embedding to 10 activations. Similar to generating 1000 dummy variables and then applying a dense layer with 10 activations.

Neural network embedding for Manager ID
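The equivalence noted for the Manager ID feature, an embedding lookup versus 1000 dummy variables followed by a dense layer, can be verified in a few lines of numpy (dimensions taken from the description above, weights random for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_managers, emb_dim = 1000, 10          # top 999 managers + "other" bucket
W = rng.normal(size=(n_managers, emb_dim))  # embedding / dense weight matrix

manager_idx = 42
one_hot = np.zeros(n_managers)
one_hot[manager_idx] = 1.0

# A bias-free dense layer on the one-hot vector and a table lookup coincide
dense_out = one_hot @ W
lookup_out = W[manager_idx]
assert np.allclose(dense_out, lookup_out)
```

The embedding form is just far cheaper: it indexes one row instead of multiplying a 1000-dimensional vector.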

When it came to using the actual photos, we faced a few obstacles. First, the photos had many different sizes and aspect ratios. Second, each listing had a different number of photos. And finally, we had storage restrictions on the GPU servers we used. By resizing all photos to 100x100 square thumbnails, we were able to squeeze them into about 2.3GB of storage. To cope with the varying number of photos per listing, and with the memory that decoded photos take as inputs to a convolutional network (100x100x3x4 bytes ≈ 120KB each), we decided as a first approach to take only the first photo of each listing, and to use only 20 thousand training samples.

Convolutional neural network (layers refers to the commonly used term channels in this context)

After training for a few hours, the results on the validation set didn't seem very promising. Since good training data is key to a good predictor, we extracted the activations from the convolutional part of the network, freezing its (trained) weights so that the photos were no longer needed for training, and expanded the training data, up to the validation set, to the entire training set. This gave us an improvement but, oddly, not enough to beat our previous Kaggle score that didn't use the photos as input. This seems to go against the simple intuition that, given more data and more freedom (weights), a properly trained model should make better predictions. Truth be told, almost no tuning went into perfecting our model, especially its architecture, so there is certainly a lot of room for improvement. We did use the validation loss to drive learning rate decay, early stopping, and the choice of weights for the final prediction.

Looking forward, potential improvements include training with photos on all listings by loading them as training occurs (already implemented), considering all the photos from each listing, and possibly training a model exclusively on the photos, with its activations used as input to another model. Extracting more information from the descriptions, applying transformations to the coordinates, and adding shifted versions of the time-based features (since they are all periodic) could also improve our predictions.


These neural networks were implemented using the Keras framework over the Theano backend. All the code can be found on our project's GitHub.

In order to have a highly automated and controlled environment for our features, where training and test data are guaranteed to go through the same transformations from raw data to neural network inputs, we developed a preprocessing framework with many possible transformations. After setting up the preprocessor, with the different pipelines for the different types of data, getting the data is as simple as calling load_and_transform(test), with test being False for training data and True for test data.
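A minimal sketch of such a framework (the class shape and loader interface are assumptions; only the load_and_transform name comes from the text):

```python
class Preprocessor:
    """Apply the same per-column transformation pipelines to train
    and test data, as described above."""

    def __init__(self, loader, pipelines):
        self.loader = loader        # callable: test -> {column: [values]}
        self.pipelines = pipelines  # {column: [transform_fn, ...]}

    def load_and_transform(self, test):
        data = self.loader(test)
        for col, steps in self.pipelines.items():
            for step in steps:
                data[col] = [step(v) for v in data[col]]
        return data
```

Because the pipelines are declared once and the only switch is the `test` flag, train/test skew from mismatched preprocessing is ruled out by construction.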

With the generator module, this framework was extended to run data loading/generation in parallel as the network is being trained. This is critical functionality when all the photo data, once loaded, would exceed the memory capacity of our system.
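The core idea of the generator module can be sketched as a background thread feeding a bounded queue (function and parameter names are assumptions; `load_batch(i)` stands in for reading and resizing the i-th batch of photos from disk):

```python
import queue
import threading

def batch_generator(load_batch, n_batches, prefetch=2):
    """Yield batches produced on a background thread, so disk I/O
    overlaps with training; at most `prefetch` batches sit in memory."""
    q = queue.Queue(maxsize=prefetch)

    def worker():
        for i in range(n_batches):
            q.put(load_batch(i))
        q.put(None)  # sentinel: no more batches

    threading.Thread(target=worker, daemon=True).start()
    while True:
        batch = q.get()
        if batch is None:
            break
        yield batch
```

The bounded queue is what keeps memory flat: the loader blocks once `prefetch` batches are pending, instead of reading the whole photo set ahead of the trainer.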

Domingos Lopes
Domingos Lopes is a Mathematics PhD from NYU who is also a machine learning specialist and a very skilled programmer. His latest work includes building a painting classification model, as well as a rental listing interest level predictor, using convolutional neural networks on Google Cloud GPUs. He is also a contributor to the open source community, having implemented Chromecast support for an Android podcatcher. He believes well-structured and automated code, combined with a strong statistical foundation and a stream of creative thinking, is key to the success of every data science project.
Abhishek Desai
I'm interested in all things mechanical, particularly the ability to use machine learning and algorithm design to locate the areas of development where efficiency can be harnessed to advance business interests. With 10+ years of experience in business analysis, I believe in using data science to make precise and informed decisions that supercharge business performance. My areas of interest include NLP, neural networks and deep learning, and of course machine learning.
Arjun Singh Yadav
Arjun received his Bachelor's degree in Mechatronics Engineering from SRM University in India. Soon after, he competed in a DARPA challenge to build an autonomous vehicle to help the blind and disabled, using Python-based algorithms to learn from sensor data and machine vision to feed driving patterns to the vehicle. Arjun has since moved further into machine learning and statistics in data applications, and is currently attending the bootcamp at NYC Data Science Academy. As an aspiring data scientist, he aims to apply analytics and data visualization to solve business and everyday problems. In his spare time, he enjoys practicing Jiu Jitsu and reading books.
Kamal Sandhu
Kamal Sandhu is a finance professional keenly interested in the potential of data science in combination with financial and management theory. He is working towards the Chartered Financial Analyst (CFA) program and the Financial Risk Manager (FRM) program. His main area of interest is in finding optimal solutions for business problems using automated data collection, analysis and prediction pipelines.
