Sanjiv Das

William and Janice Terry Professor of Finance, Leavey School of Business, teaches the Credit Risk Modelling elective seminar

What is your research focus?

My focus is credit models, exactly what I taught here. Right now, I am working a lot on restructuring mortgage debt, as presented at the research workshop. The focus there has really been on designing a model for optimal restructuring of distressed debt. One interesting result of the model, forthcoming in the Journal of Financial and Quantitative Analysis, is that to give relief to the borrower while optimising expected loan value for the lender, you should write down part of the principal rather than reduce the rate of the loan. This was a surprising outcome; I discussed these results with the Federal Deposit Insurance Corporation and presented them to Citibank. The United States government changed its policy to allow principal write-downs (the HAMP-PRA scheme) in home loan modifications. So this is going on now. This was a most interesting research project because it was not just about the finance or the maths; I learnt a lot about the behaviour of people and the political process. No other research project I have done had all these dimensions.
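To make the two modification levers concrete, here is a minimal sketch. All loan figures are hypothetical, and it only compares the borrower's monthly payment relief; the model's lender-side argument about expected loan value and redefault is not captured here.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortising payment."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Original distressed loan (hypothetical figures)
orig = monthly_payment(300_000, 0.06, 30)

# Option A: write down 20% of the principal, keep the rate
write_down = monthly_payment(240_000, 0.06, 30)

# Option B: keep the principal, cut the rate from 6% to 4.5%
rate_cut = monthly_payment(300_000, 0.045, 30)

print(orig, write_down, rate_cut)  # both options lower the payment
```

Both levers can deliver similar payment relief; the model's point is that the write-down also removes negative equity, which changes the borrower's incentive to default.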

I also work on topics closer to the EDHEC-Risk Institute expertise like portfolio optimisation – currently I am working on optimising portfolios with options, and on pension planning, because in an environment of underfunding, one really needs to think about the correct prescriptions.

I also work on applications of text mining, where you take a large body of unstructured text and try to pull structured data out of it. One project I did with IBM was to take all the loan documents that banks file with the SEC in relation to interbank lending – these are hundred-page documents – and have software read them, extract all the parameters of the loans and construct a database. Then we put it all together to see how critical each bank is to the interbank lending system, and we score them all, which allows us to rank each bank's systemic risk, assessing the chance that a particular bank could bring down the lending system.
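As a rough illustration of scoring banks by their criticality in a lending network (this is not the actual methodology, and the exposure figures are invented), one could run an eigenvector-centrality-style power iteration over a toy exposure matrix:

```python
# Toy interbank exposure matrix: entry [i][j] is bank i's exposure to bank j
# (hypothetical figures; the real scoring is far richer than this sketch).
exposures = [
    [0, 50, 10, 0],
    [20, 0, 30, 5],
    [5, 15, 0, 40],
    [0, 10, 25, 0],
]

def criticality_scores(matrix, iters=100):
    """Eigenvector-centrality-style score via power iteration:
    a bank is critical if it is heavily linked to other critical banks."""
    n = len(matrix)
    score = [1.0] * n
    for _ in range(iters):
        new = [sum(matrix[i][j] * score[j] for j in range(n)) for i in range(n)]
        norm = sum(new) or 1.0
        score = [s / norm for s in new]
    return score

scores = criticality_scores(exposures)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
print(ranking)  # banks ordered from most to least critical
```

The design choice here is standard for network importance measures: a bank's score depends recursively on the scores of its counterparties, so size of exposure alone is not what drives the ranking.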

I also work on taking a behavioural approach and mapping it to the Markowitz mean-variance approach. These ideas were published in the Journal of Financial and Quantitative Analysis two years ago; it was an interesting paper, because I did it with Harry Markowitz, the father of portfolio theory, and Meir Statman, a pioneer of behavioural finance, and we found a reconciliation between the two approaches — [The model integrates mean-variance portfolio theory and behavioural portfolio theory into a new mental accounting framework, demonstrates a mathematical equivalence between the two theories, and links both to risk management using value at risk. The new framework links investor consumption goals and portfolio production.]. I also do some computer science research, such as high performance computing for finance applications.
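One standard way to see this kind of equivalence, assuming normally distributed portfolio wealth (a sketch, not the paper's full derivation): a mental account that requires wealth $W$ to fall below a threshold $A$ with probability at most $\alpha$ is a VaR-type constraint, and under normality it rewrites in mean-standard-deviation terms.

```latex
% Mental-account constraint as a VaR-type condition, W ~ N(mu_W, sigma_W^2):
P(W \le A) \le \alpha
\quad\Longleftrightarrow\quad
\Phi\!\left(\frac{A - \mu_W}{\sigma_W}\right) \le \alpha
\quad\Longleftrightarrow\quad
A \le \mu_W + \Phi^{-1}(\alpha)\,\sigma_W .
```

Since the last condition is linear in $(\sigma_W, \mu_W)$, each mental account's optimal portfolio lies on the mean-variance frontier, which is roughly the sense in which the two frameworks coincide.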

You mentioned the course, what did you go over in this elective seminar?

The objective of the course was to cover the entire range of default models in fifteen hours. I broke it down into five segments. The first segment presents the products that are driven by the models, so participants have a basic idea of what products are used in practice and what the goals of these models are. This is really a one-hour warm-up. Then I spent a decent amount of time on credit scoring models – how do you evaluate them, what are the metrics people use. Then we moved on to two broad classes of models: structural models, which have been around for many years in academe but are now also widely used in practice; and then, in the following segment, reduced-form models. The final segment was on correlated default, understanding and modelling the things that caused the crisis with securities such as collateralised debt obligations; we did not have to focus too much on the products because the good thing was that all the people in the class actually knew those products, so we were able to go directly to PhD-level issues. I ended the seminar with hybrid models, which combine the two approaches – there are interesting mathematical issues in that approach, plus, from a practical point of view, the question of computational algorithms. This is something PhD students need to know because simply knowing the model does not help. You need to know computational tricks; otherwise you will spend three days over something that can be done in one hour.
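To give a flavour of the structural-model segment, here is a minimal sketch of the classic Merton (1974) model, in which a firm defaults if its asset value ends below the face value of its debt at the horizon. The firm's figures below are hypothetical.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def merton_default_prob(V, D, r, sigma, T):
    """Risk-neutral default probability in the Merton structural model:
    default occurs if firm asset value V_T falls below debt face value D."""
    d2 = (log(V / D) + (r - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm_cdf(-d2)

# Hypothetical firm: assets 120, debt face value 100, one-year horizon
pd = merton_default_prob(V=120, D=100, r=0.03, sigma=0.25, T=1.0)
print(round(pd, 4))
```

A sanity check on the model's behaviour: raising asset volatility raises the default probability, which is exactly the leverage-plus-volatility intuition that structural models are built on.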

How was it teaching these PhD students?

It was an interesting group to teach because they knew a lot of things that the usual PhD student does not know. I admit I had not realised this before coming to class. Though unexpected, it was a pleasant surprise to have more senior and experienced students who knew the products really well. In many ways it is easier to teach such people because they are highly motivated and have context. Besides, all of them had chosen the course, which makes a difference; sometimes in a traditional PhD programme, you have students sitting in your class who plan on working in a different area and just want to get out of it as fast as possible. I also guess that the kind of research that EDHEC-Risk Institute contributes to is attracting students who are more quantitatively inclined than the average student from the average PhD programme in the United States, which makes the class more homogenous in terms of skill sets and interests, in a way that is very useful for a course like mine.

What advice could you give to PhD students looking to identify a suitable topic for their research work?

Actually, some of the students did talk to me about it already. The first piece of advice I give a lot of students is not to be in a big rush to get the data and do something with it. You have to have a really good question first. Otherwise, no matter how good your data is, your work is not going to be interesting and you will not publish a good paper. The second thing is, you should always analyse the question theoretically first, because that exercise will take you to the right empirical specification for the data work that you want to do. There should be a good theory because once you have a theory, you can derive a setup that tells you that if this assumption holds, then you should see this in the data, and so on. "No theory, no paper." Another piece of advice is not to be in a big rush to find a topic, because you are going to be stuck with it for the duration of the PhD and possibly a few more years. You do not want to settle sub-optimally on something that you will not enjoy. Additional advice is to start writing right away, do not wait; the process of writing helps you structure your thoughts better, structure your theory better, structure the empirical work better. Of course, you should always be ready to throw things away. You should write early rather than late and keep writing. That is part of the life of a researcher; writing a lot is a good thing and throwing away a lot is also a good thing. And you know, you keep learning through that process, and that is important. As I told the class, you should not expect to learn everything you need to know in a PhD programme; that is never going to happen. There were countless things I learnt after my PhD because I needed to solve a problem, I needed a technique that I did not know and I had to learn it for myself; you should be learning how to learn instead of thinking that everything is going to happen in the PhD programme. That is the purpose of the programme.