Is P2P lending worth it?

Based on statistics from Mintos P2P lending platform.

If you browse the net for passive income ideas, sooner or later you will read that P2P lending is a great source of passive income, although usually with a note that it is pretty risky as well. In this article I will share the results of different investment simulations based on P2P statistics found on the Mintos home page.

The stats from Mintos

  • Average annual return: 11.89%
  • Share of defaulted loans: 7.69%


Simulation – Mintos case

I ran 1 million simulated investments, each with a 7.69% chance of default; if an investment doesn’t go bust, it returns 11.89% annually. Even though I reran the simulation a couple of times (so several more millions of simulated investments), I got the same result:

  • Investors see a default rate of 7.7%
  • Investors’ annual return is a 4.3% loss.
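The simulation code itself isn’t shown here, but a minimal sketch of this kind of Monte Carlo run could look like the following. It assumes a defaulted loan loses the entire principal, which is only one possible loss model; the exact assumptions behind the article’s figures aren’t stated, so numbers from this sketch won’t necessarily match them:

```python
import random

def simulate(n, default_rate, annual_return, seed=42):
    """Simulate n one-year loans; a defaulted loan loses the whole principal."""
    rng = random.Random(seed)
    total = 0.0
    defaults = 0
    for _ in range(n):
        if rng.random() < default_rate:
            defaults += 1               # principal lost entirely
        else:
            total += 1 + annual_return  # principal back plus interest
    # observed default rate, net return on the whole portfolio
    return defaults / n, total / n - 1

default_share, net_return = simulate(1_000_000, 0.0769, 0.1189)
```

Rerunning with a different seed plays the role of the article’s repeated million-investment runs.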

Simulation – bit worse than Mintos

For this simulation I assumed that the return stays the same, since most P2P lenders offer 10–12% returns, but that the default rate is somewhat higher.

  • 11.8% of investments default
  • Investments bring a 13% loss

Simulation – Bondoras case

It is really difficult to get default rates from Bondora directly but I found some info here.

Based on that I set the default rate to 24% and the return rate to a whopping 20.93%.

With these risky-end stats we lose 32% annually, but let’s check the safest option on Bondora: a default risk of 13.06% with returns averaging 25.8%:

  • Annual return: –3.56%


It looks like unless a P2P platform provides risk-free investment opportunities, they do not seem appealing, except in the case when losing 4% per year is your most profitable option.

Additionally, there is the credit risk of the P2P company itself. These companies have not been around for long, and we don’t know if they will survive the next financial crisis.

After playing around with my simulator I found that we can get into profit if we manage to lower the default risk to 4%. There are also platforms that offer secured investment opportunities where the only risk You take is that of the platform itself going bust. One such platform exists (I can’t run a simulation for it since they don’t publish default rates for loans on their platform), but there must be others as well.

Just one final conclusion: the high number of recommendations to invest in P2P lending can mean only one thing – P2P lending companies have some good affiliate rates. You can check the affiliate program whenever somebody or some article suggests You should invest in P2P lending.

Stay safe! 🙂

Visualizing portfolio historical performance with Python

Hey there!

In this article we are going to examine and visualize the historical performance of a portfolio. This can be any portfolio, but since we scraped prices for SEB funds in our previous article (Web scraping investment fund prices with Python), we will use those prices here as well. If You haven’t read that article, it would be good to at least check out the full code used there, as we will import it and use one variable from it. By doing so we will also trigger the whole thing to run and update the prices for us here.

The set up

This is how we will do this:

  1. Define functions for gathering historical prices for our portfolio allocation
    • Gather prices per chosen funds
    • Calculate total portfolio price based on weights in allocation
  2. Define a function that gets all the ISIN codes of our funds and runs the price update
  3. Put together our portfolio returns and the SEB fund average returns for performance comparison

Pretty easy but I will still guide You through it. No worries about that.

The walk-through

The imports

We will need to import 3 things, one of them optional:

  • pandas – for data cleaning and manipulation.
  • matplotlib.pyplot – for data visualization.
  • datetime – to get today’s date.
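As a sketch, the import block could look like this (the Agg line is my addition, useful when running headless from a terminal):

```python
import datetime

import matplotlib
matplotlib.use("Agg")  # render charts off-screen; harmless to drop if you have a display
import matplotlib.pyplot as plt
import pandas as pd
```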


We could just read the MS Excel file instead, but I wanted this file to update the previous one. I know this could be done in a simpler way, but whatever. What’s done is done.

We could also call matplotlib.pyplot optional, since it is possible to get charts from pandas DataFrames as well, but whenever I run it from the terminal I see only matplotlib charts and no pandas ones, so I would still trust the main chart to matplotlib.

Function for loading prices from MS Excel

This function reads the Excel file we previously saved and retrieves an aggregated portfolio price based on the given allocation and the prices from the sheet ‘DATA’. It requires ‘allocation’ as a pandas DataFrame with fund ISIN codes and allocation weights for our current portfolio.

In order to get that nice (and single) list of prices we will need to go through these steps:

1. Create a list of column names (ISIN codes) for allocation

Initially I believed I could just use allocation.index as a list for filtering, but something in pandas proved me wrong. The issue was that ‘allocation’ had ISIN codes that were not saved in the Excel file. Reminder: we dropped funds that didn’t have at least 1000 prices/days when gathering prices.

Why did we include ISINs for which we don’t have prices in the allocation? It happens when we want to take the average fund performance by giving each ISIN equal weight. This shouldn’t be a problem if You use the same ISIN codes that we have for our portfolio.

Quick workaround – loop through all the ISIN codes saved in prices and then through allocation.index, putting the ISINs that matched into a separate list – filteredColumn.

2. Filter and adjust prices according to given allocation

This task calls for another loop!

But if You think about it, we could have done it all in the previous loop… All the bright ideas come to me too late. I am not willing to change this setup (since it works well).

Anyway. We will do the following:

  1. A basic double loop over the fund prices DataFrame index and the filteredColumn ISINs
  2. Append to a temp list p for our portfolio price.
  3. Drop NA values everywhere to keep only clean data

Here the parameter errors=’coerce’ means that if an error happens while converting DATE to a date, pandas delivers NaT (a missing value), which we then drop on sight.
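As a small illustration of that behaviour (the column names here are just examples):

```python
import pandas as pd

df = pd.DataFrame({"DATE": ["2020-01-02", "not a date", "2020-01-03"],
                   "PRICE": [10.0, 11.0, 12.0]})
df["DATE"] = pd.to_datetime(df["DATE"], errors="coerce")  # bad value becomes NaT
df = df.dropna()  # and the garbled row disappears
```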

3. Group by date and deliver some stats

Lastly, we want to use these neat and clean price dates as the index, since they are unique (at least they should be), so we will use groupby, which assigns whatever we are grouping on as the index.

If for any reason there were duplicate dates, we would sum up the prices of our portfolio anyway and escape an error. But so far there haven’t been any duplicates.
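A sketch of that step, with made-up column names:

```python
import pandas as pd

prices = pd.DataFrame({"DATE": ["2020-01-02", "2020-01-02", "2020-01-03"],
                       "PRICE": [5.0, 5.0, 10.5]})
# Grouping on DATE makes it the index; duplicate dates get summed together.
portfolio = prices.groupby("DATE").sum()
```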

And as a cherry on top – this is optional – we will get some statistics on our prices. We could just go with pd.DataFrame.describe(), but we don’t need all of that. We will settle for the STD, mean return and Sharpe ratio:

It looks a bit ugly, but I believe that using an extra variable when it’s not needed just puts extra weight on memory, so I went with the raw portfolio.pct_change().dropna() instead of assigning it to a variable.

To be perfectly honest with You, I will drop these print() lines as soon as this article gets published, because I have bigger plans for this in the near future and I would prefer it not to do non-obligatory calculations. Meanwhile, the current output is:

Before You continue, note that the .dropna() after .pct_change() is not there to drop erroneous entries. It is for the first record, which is always empty when You use .pct_change().

That being said, here’s the full code of the getPortPrices function:
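A hedged sketch of what getPortPrices could look like, following the steps above. The sheet name ‘DATA’, the DATE cleaning and the printed stats come from the article; the column names, argument shapes and everything else are assumptions, and for easier testing this sketch takes the already-loaded sheet as an argument instead of calling pd.read_excel(file, sheet_name="DATA") itself:

```python
import pandas as pd

def getPortPrices(allocation, prices):
    """allocation: DataFrame indexed by ISIN with a 'weight' column.
    prices: the sheet 'DATA' as a DataFrame (one DATE column, one column per ISIN)."""
    # 1. Keep only the allocation ISINs that actually exist in the price sheet
    filteredColumn = [c for c in prices.columns if c in allocation.index]

    # 2. Clean the dates (bad ones become NaT and get dropped), weight the prices
    prices = prices.copy()
    prices["DATE"] = pd.to_datetime(prices["DATE"], errors="coerce")
    prices = prices.dropna(subset=["DATE"])
    weights = allocation.loc[filteredColumn, "weight"]
    prices["PORTFOLIO"] = (prices[filteredColumn] * weights).sum(axis=1)

    # 3. Group by date (duplicate dates, if any, get summed) and print some stats
    portfolio = prices.groupby("DATE")["PORTFOLIO"].sum()
    returns = portfolio.pct_change().dropna()
    print("STD:", returns.std())
    print("Mean return:", returns.mean())
    print("Sharpe ratio:", returns.mean() / returns.std())
    return portfolio
```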

Function for initiating SEBscraper and getting ISIN codes

This is a short one – it imports our previous project SEBscraper as seb and initiates the price update, from which we take the DataFrame funds. As we noticed in the previous function, You have to be careful with this one, because we didn’t save prices for all the funds we got from the overview page. We only took the ones that had significant history (more than 1000 prices).
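Since SEBscraper lives in the previous article, here is a hypothetical stand-in that only mimics the shape of its output; the function name updatePrices, the columns and the ISINs are placeholders, not the real API:

```python
import pandas as pd

def updatePrices():
    """Stand-in for the scraper's price update; the real one scrapes SEB's pages."""
    return pd.DataFrame({"name": ["SEB Fund A", "SEB Fund B"],
                         "isin": ["LV0000400001", "LV0000400002"]})

funds = updatePrices()              # in the real code this comes from SEBscraper
isinCodes = funds["isin"].tolist()  # the ISIN codes we pass on to the next step
```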

Putting everything together

Finally all the pre-requirements have been fulfilled and we can get our hands wet with the data.

We will start with a print() to let ourselves know that we are starting the process, and then initialize the price update. After the prices are updated and we have the latest list of fund ISIN codes that SEB offers, we will read the prices out from the Excel file where they are saved.

We want to process them in two ways:

  1. Get our current portfolio allocation
  2. Get an average of SEB investment fund performance

Well, we will calculate the performance, but in order to get there we will assign equal weights to all funds. We can also add other allocations that we might like. I have prepared a split into BONDS, EQUITY and MIX based on the fund names. Here’s a reminder of how it looks:
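A hedged sketch of the equal-weight allocation and the name-based split; the fund names, ISINs and the keywords used to classify funds are my assumptions:

```python
import pandas as pd

isinCodes = ["LV0000400001", "LV0000400002", "LV0000400003"]
names = ["SEB bond fund", "SEB equity fund", "SEB mixed fund"]

# Equal weights: every fund gets 1/N, which yields the "SEB average" benchmark
equalWeight = pd.DataFrame({"weight": [1 / len(isinCodes)] * len(isinCodes)},
                           index=isinCodes)

# Name-based split into rough categories (the keyword matching is an assumption)
def category(name):
    lowered = name.lower()
    if "bond" in lowered:
        return "BONDS"
    if "equity" in lowered:
        return "EQUITY"
    return "MIX"

categories = [category(n) for n in names]
```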

If You wanted to comment that my chosen allocation looks like I’m 60+, then I agree. Way too conservative, and I expect to see that in the charts as well.

Here’s where we start:

So now we have everything ready in DataFrame totalPrices. The code gives the following output:

As we can see, our bench (a.k.a. the SEB investment fund average performance) outperforms our allocation in returns and Sharpe ratio, while our allocation beats it in capital preservation. But since I am not ancient, I would prefer to take more risks for bigger winnings… but more on choosing optimal allocations in the next article.

The plot

So we get to the fun part – data visualization. The first thing I want to stress here is that we are dealing only with dates for which we have prices, so I would assume that all of these are working days. In other words, picking the last 360 days would not equal 1 year. In this code I have made the assumption that each month has an equal number of business days: 21. That means that quarterly data is 63 days and annual is 250. This will be relevant real soon.

And by soon I meant right now. We will check the standard deviation (STD) for the past 250 days and also check price movements for the same period with a one-month moving average.
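A sketch of those two rolling windows under the business-day assumptions above (the price series here is synthetic):

```python
import pandas as pd

WINDOW_MONTH, WINDOW_YEAR = 21, 250  # assumed business days per month / year

# Synthetic daily price series, just to demonstrate the windows
prices = pd.Series(range(1, 301), dtype=float)

rollingStd = prices.pct_change().rolling(WINDOW_YEAR).std()  # ~1-year risk
movingAvg = prices.rolling(WINDOW_MONTH).mean()              # ~1-month smoothing
```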

For the STD that represents risk we see that our portfolio is 10x safer and more stable than the average:

But it has its trade-off in average gains:

At some points we would of course enjoy having the portfolio that we have here – like in all those price drops below 0 – but in general we can see that the average SEB fund enjoyed a higher rise in price than our allocation. We can also notice here that 250 days back is not equal to one year; it’s more like 3/4 of a year. We can note this in further work that needs to be done. In this case, for comparison purposes, it’s fine as long as all date ranges are the same within a single comparison.

Let’s take a peek at our price movement frequency in a histogram: it’s not all bad if we do not take inflation and such into account. I mean, it’s sort of mostly positive… right next to zero. The fact that price changes were mostly upwards is actually really good if You think about it. It just sucks that those movements upwards are not significantly large.

Let’s move on and check the price movements for the last 63 dates and also add some extra stats:

We also get the following output:

Nothing to be proud of, since we are in the red for the last ~3 months. This just keeps sending us signals that there is a better allocation among these funds. If we assume we can choose only among these funds, we can be sure that a more optimal allocation than our current one exists. This assumption holds for me, since I have a life insurance policy linked to these funds and the privilege of setting the allocation. (Another reason why we will definitely discuss asset allocation optimization.)

One last plot to view is a scatter plot of STD vs. returns, to see how well risk and return go hand in hand. Obviously we want the least risk for the most return. But let’s see if that is the case. From the previous metrics we can already anticipate low risk scores and not-so-good-looking returns:
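A sketch of such a risk/return scatter; the fund return series here are synthetic stand-ins:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line if a display exists
import matplotlib.pyplot as plt
import pandas as pd

returns = pd.DataFrame({"Fund A": [0.01, -0.02, 0.03, 0.00],
                        "Fund B": [0.002, 0.001, -0.001, 0.002]})

stds = returns.std()    # risk per fund
means = returns.mean()  # average daily return per fund

plt.scatter(stds, means)
plt.xlabel("STD (risk)")
plt.ylabel("Mean return")
plt.savefig("risk_return.png")
```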

You might guess which one is our portfolio in there. It’s a clear indicator that we need to reallocate the portfolio for better performance.

Saving the summary figures to MS Excel file

So now we can save our results to an Excel spreadsheet and close this topic, so we can move on to the optimization. 🙂

As You might have noticed, I added a bit more periods to the summary, so it has quarterly, annual and 5-year figures – or at least what we consider the equivalents of those time periods within this project.

Full code:

As always, here’s the full code for this project (go to the web scraping fund prices article to get the code for SEBscraper):

Web scraping investment fund prices with Python

In this article we will look at a relatively short Python project that reads the names of all SEB investment funds, their ISIN codes and their historical prices. The work is easy and we could do it manually, but I personally got bored at the fourth fund and its prices. So let’s do it all automatically with the power of Python!

Why SEB investment funds?

I could think of only 2 reasons at the moment. In this case we do this for educational purposes, but in general we could do it if:

  • we want to invest, and of course analyze the time series data, but there is little to no money we could spare for investing – yet we are SEB’s clients. Bank clients usually purchase the fund shares offered by the bank at a reduced commission and can also invest relatively insignificant amounts.
  • we want to practice web scraping on some basic pages. Believe me, in finance it’s hard to find web pages more basic than those of big old banks that have been around for a century. They never had a necessity to upgrade their web presence. (up until now…)

When I started this article I was still working at SEB, and from those times I still have a life insurance policy for which I have to determine the allocation of my money across their funds. In the future we can expect a continuation of this topic with analysis of the time series data, as SEB itself does not offer a way to look at the historical performance of your portfolio. This task is left to me (and You). If you have a similar problem with SEB or another bank, this series of articles might be not only educational but even useful.

The Code

For those who don’t care about my monologue around the code, here’s the complete code:

For those who want to view this code in separate parts and continue reading my monologue, I chose to split this fairly short code into 3 steps:

  • Acquisition of fund name and ISIN code
  • Reading each individual fund’s page and finding prices
  • Save prices in DataFrame

Acquisition of fund name and ISIN code

For those who wonder what ISIN is:

The International Securities Identification Number (ISIN) is a code that uniquely identifies a specific securities issue. The organization that allocates ISINs in any particular country is the country’s respective National Numbering Agency (NNA)


Source: Investopedia

So it’s clear – an ISIN is a unique security reference made up of two letters (based on country) followed by ten more characters (nine alphanumeric and a check digit), 12 characters in total. In our case it has added value – it is included in the HTML addresses of the SEB fund price pages. In other words, we will collect them because they are vital for successfully fulfilling our plan, but we will return to this in the next section on reading the individual fund pages.
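A quick sanity check of that format in code:

```python
import re

# Two country letters, nine alphanumeric characters, one check digit = 12 chars
ISIN_PATTERN = re.compile(r"^[A-Z]{2}[A-Z0-9]{9}[0-9]$")

def looks_like_isin(code):
    """Shape check only; this does not validate the check digit."""
    return bool(ISIN_PATTERN.match(code))
```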

How are SEB investment fund pages structured?

Open the SEB investment fund price page, right-click and select Inspect (or the equivalent in whatever web browser you are using). You will see the website’s design from the inside. I use Google Chrome, so in my case I right-click and choose Inspect. You should see something like this (on the right side):

I am not ready to run an introduction-to-HTML course here, so I will describe it in my own words. Hopefully some would describe it as plain English.

Here you are looking for an element that starts with <table>, which obviously marks a table element of the page. Everything from <table> to </table> consists of rows, which in turn are marked as <tr> … </tr>, and between the tr marks we expect to see cell separators <td> … </td> containing the data. Try to expand these elements on your own and shortly You will see that we are interested in the second <td> element of each <tr> that contains information about SEB funds. The records we are looking for should look something like a <td> holding a link whose address contains the ISIN and whose text is the fund name, e.g. SEB aktīvais plāns. As you can see, such a record has both the ISIN and the fund name. So you need to find every <tr> entry containing these two elements, but first you must read the page with Python:

So here we use 2 modules: requests and BeautifulSoup from bs4. We use requests to get the data into Python and save it to the variable fundList. Then we want to process the HTML into a more readable format, so we take the HTML text from fundList and parse it with BeautifulSoup. Don’t forget to add .text or .content to your requests response variable when passing it to BeautifulSoup. I must admit that this part is hard to mess up.

BeautifulSoup allows you to use a function like .find_all(‘tag’). This method selects all HTML elements/tags that match whatever tag we provide. The plain .find(‘keyword’) also works, but it stops at the first element. So let’s write .find_all(‘tr’) and .find_all(‘td’), because these two elements are the only ones that interest us. We don’t need .find_all(‘table’), since every table is built from tr and td elements anyway.

The isin list should be defined early, otherwise we will get an error as soon as we use the .append() method to add new fund data to the list.
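A hedged sketch of that parsing step, run here on an inline HTML snippet instead of the live page; the real code fetches the page with requests.get first, and the URL shape and ISINs below are placeholders:

```python
from bs4 import BeautifulSoup

# Stand-in for fundList.text from requests.get(...) on the fund list page
html = """
<table>
  <tr><td>1</td><td><a href="/fund/LV0000400000">SEB fund</a></td></tr>
  <tr><td>2</td><td><a href="/fund/LV0000400018">SEB other fund</a></td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

isin = []  # define the list early, otherwise .append() below would fail
for row in soup.find_all("tr"):
    cells = row.find_all("td")
    link = cells[1].find("a")                     # the second <td> holds the link
    isin.append(link["href"].rsplit("/", 1)[-1])  # the ISIN sits in the address
```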

Saving information to DataFrame

It is more convenient to work with 2D data stored in a DataFrame rather than in lists or arrays. DataFrames provide various built-in operations that we can perform on the data tables:
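For example, the scraped name and ISIN lists can be combined into one DataFrame (the column names are my choice):

```python
import pandas as pd

names = ["SEB fund", "SEB other fund"]
isin = ["LV0000400000", "LV0000400018"]

# One row per fund, ready for filtering, joining and saving to Excel
funds = pd.DataFrame({"name": names, "isin": isin})
```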

Reading each individual fund’s page and finding prices

By clicking on a fund, or on the corresponding element of the section above, we see that the fund page addresses have a simple structure. It is enough to replace the ISIN code in the address with an entry from the isin list (e.g. isin[1]) and we get the HTML address of an individual fund.

Structure: Where to find prices?

Same as before, we need to find the part of the HTML code that interests us. It is fairly fortunate that these pages are not designed according to the latest fashion with jQuery requests or Flash. That would make them harder to scrape… at least for me.

When looking at the code inside Python, you can use the soup.prettify() method, which indents the text block to make it look more structured, but we can of course also inspect the page in our browser. If you scroll quickly through the code, You will realize that this time we are interested in a place that starts with:

A bunch of rows with data in the middle, and it will end with this:

Once you locate the information, you need to define a starting point and an ending point. I would definitely not recommend slicing from heading 1 to heading 35 (these are random numbers just to show an example). Maybe some readers could recommend a more efficient method of handling this, but I will use the .find() text method to locate the starting/ending points of each price per date.

So let’s start reading:

If you are reading this article in order to learn how to download information from websites with Python, I have to warn you that most of the time goes into cleaning up your data or searching for the right cleaning solution. I had to play around a bit until I got this part done right… or at least working. I wanted to find a fancier solution for this data cleaning issue, but at the end of the day I didn’t care how it looks as long as it works.

As you can see, I use the txt variable to temporarily save the piece of text that soup.find_all(‘script’) delivered. The main task of txt is to clean the text as much as possible so that we don’t have to do much cleaning later on. Here we will use the simple search function to find start and end points based on what we saw in the HTML code. The main task is to find the correct keywords or signs that are unique to these