Saturday, January 26, 2013

Risk Management: Through the Eyes of an SP Engineer!!


This post is motivated by the September edition of the IEEE Signal Processing Magazine. We must all be familiar with risk management, a topic of importance in both engineering and finance. It grew out of our desire to control nature's unexpected phenomena; though such events can never be fully controlled, this branch of science has helped reduce the devastation they would otherwise inflict. Broadly, we can describe risk management as the study and mitigation of rare events that have potentially devastating outcomes.
The problem of risk management is closely related to reliability theory, which calls for formulating robust solutions to real-world problems arising around us. The events addressed range from oil spills and global warming to acts of terrorism. Improved methodologies that focus not only on point estimation, or maximal-probability events, but also on the occurrence, prediction and cost of outliers are the need of the era, especially in domains like security and defence. The cost we pay for the lack of innovative research in these arenas is beyond bounds.
The study of risk management can be branched into two parts:
1. The statistical study and theory of extreme events [1]. For instance, telecommunication traffic research rests on packet-size distributions, which share similar heavy-tailed properties with financial data [2].
2. The modelling and prediction of the future distribution of losses. The derived loss-distribution function should reflect not only the conditional uncertainty under various model assumptions, but also the unconditional nature of the losses. This naturally calls for a Bayesian approach, on which research is still rudimentary. Our emphasis in this domain should be on the far right tail, the extreme losses or errors, where we may have to sacrifice some predictive accuracy to obtain a better estimate of rare events.
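As a rough illustration of the first branch, here is a minimal sketch, with purely synthetic losses and an arbitrary 95% threshold of my own choosing, of peaks-over-threshold tail modelling using SciPy's generalized Pareto distribution:

    # Hypothetical sketch: peaks-over-threshold tail modelling with a
    # generalized Pareto distribution (extreme value theory).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    losses = rng.pareto(2.5, size=10_000)          # synthetic heavy-tailed losses
    threshold = np.quantile(losses, 0.95)          # keep only the far right tail
    excesses = losses[losses > threshold] - threshold

    # Fit a generalized Pareto distribution to the excesses over the threshold.
    shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)

    # Estimate the probability of a loss exceeding some extreme level:
    # P(loss > level) = P(loss > threshold) * P(excess > level - threshold).
    extreme_level = 10.0
    tail_prob = 0.05 * stats.genpareto.sf(extreme_level - threshold,
                                          shape, loc=0.0, scale=scale)
    print(f"shape={shape:.3f}, scale={scale:.3f}, tail probability ~ {tail_prob:.2e}")
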
The events occurring around us should be closely analysed for correlation, from which we can design a model. Many of these events follow well-known distribution models, which makes our job easier. The skill we need to develop is to look around for parameters that capture the correlation. Correlation induces predictability, which in turn completes the conditional analysis. The tougher part of the cake is the unconditional behaviour of the model, which remains a heavily intuitive study.
Readers who complain of a lack of research arenas should try tuning their minds to these problems. I am sure by now you might be dreaming of umpteen thesis reports in your name :)

REFERENCES
[1] S. Resnick, Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. New York: Springer-Verlag, 2007.
[2] W. E. Leland, M. S. Taqqu, W. Willinger, and D. V. Wilson, "On the self-similar nature of Ethernet traffic," IEEE/ACM Transactions on Networking, vol. 2, no. 1, pp. 1-15, 1994.

Monday, March 26, 2012

Constrained Optimization - Single Equality Constraint

As a prerequisite for studying compressed sensing algorithms, there are many mathematical concepts to master, the most important being optimization techniques. Optimization problems can be broadly classified as constrained and unconstrained. Constrained optimization is a type of problem in which the search space is dictated by constraints, while an unconstrained problem has an unrestricted search space.

The constraints may be of two types: equality constraints or inequality constraints. This post discusses the equality-constrained case. An example of such a problem is: minimize f(x) subject to g(x) = 0, which calls for a search for the optimal value at which f(x) is minimum over the set of points x that satisfy g(x) = 0.

The condition characterising this solution is quite different from that of the unconstrained problem, for which the solution simply satisfies ∇f(x*) = 0.

The concept of a level set is exploited to solve this problem. A level set is the set of points on which the function value equals a given constant. The objective function is represented as a sequence of level sets starting from the lowest possible level.

There should be a value of the level for which the corresponding level-set curve of the objective function f(x) just touches the constraint curve g(x) = 0. At this contact point, denoted x*, the two curves share a common tangent. That tangent is orthogonal to both gradients ∇f(x*) and ∇g(x*) drawn at x*, which means the two gradients are collinear.
This is shown in the figure below.


The gradients may point in the same or in opposite directions, depending on the directions in which f(x) and g(x) increase at the point x*, and the optimality conditions for the two cases are as below.

Case 1: If the gradients point in the same direction, the solution is found from ∇f(x*) = λ ∇g(x*) with λ > 0.


Case 2: If the gradients point in opposite directions, the solution is given by ∇f(x*) = -λ ∇g(x*) with λ > 0. In either case the condition can be written as ∇f(x*) + μ ∇g(x*) = 0 for some scalar μ, which is the Lagrange-multiplier condition for a single equality constraint.
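
As a quick sanity check, here is a minimal sketch of the single-constraint condition in action; the toy objective f(x, y) = x^2 + y^2 and constraint g(x, y) = x + y - 1 are my own choices, solved here with SymPy:

    # Hypothetical example: minimize f = x^2 + y^2 subject to g = x + y - 1 = 0.
    # The collinearity condition grad(f) = lam * grad(g), together with g = 0,
    # determines the optimum.
    import sympy as sp

    x, y, lam = sp.symbols('x y lam', real=True)
    f = x**2 + y**2
    g = x + y - 1

    eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
           sp.diff(f, y) - lam * sp.diff(g, y),
           g]
    sol = sp.solve(eqs, [x, y, lam], dict=True)
    print(sol)   # expected: x = y = 1/2, lam = 1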


In the following post we shall discuss the multiple-equality-constraint problem. The visualization of the level-set concept comes in very handy when getting familiar with these ideas.




Tuesday, February 21, 2012

Overcomplete Basis for Sparse Coding?

An overcomplete basis has more basis vectors than the dimensionality of the input, so the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they are more robust in the presence of noise, can be sparser, and offer greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in the primary visual cortex. An overcomplete basis is expected to yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of independent component analysis, and it provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
Initially the technique might seem to contradict the concept of sparseness, since the number of analysis coefficients increases. But the extra dimensions provide additional degrees of freedom that can be exploited to increase the sparsity of the coefficient set, so only those coefficients deemed significant appear in the representation. This type of coding forms the basis of what is called the 'holy grail' of audio coding, namely object coding, which can be seen as a step towards musically intelligent acoustic coding.
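
To make this concrete, here is a minimal sketch of sparse coding in an overcomplete dictionary; the random dictionary and synthetic signal are assumptions of mine, and scikit-learn's orthogonal matching pursuit is just one of many possible sparse coders:

    # Hypothetical sketch: sparse coding in an overcomplete dictionary.
    # A 64-dimensional signal is represented with a 256-atom dictionary,
    # so the representation is non-unique and we pick a sparse one.
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    rng = np.random.default_rng(0)
    n_features, n_atoms, n_active = 64, 256, 5

    D = rng.standard_normal((n_features, n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

    true_coef = np.zeros(n_atoms)
    true_coef[rng.choice(n_atoms, n_active, replace=False)] = rng.standard_normal(n_active)
    signal = D @ true_coef

    coef = orthogonal_mp(D, signal, n_nonzero_coefs=n_active)
    print("non-zero coefficients in the recovered code:", np.count_nonzero(coef))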

Friday, December 30, 2011

Compressed Sensing: An Overview #2 (Group Testing)

Here we will consider a situation where N soldiers have to be tested for a particular disease using a sensitive but expensive blood test. If blood samples from different soldiers are pooled together, the test is still sensitive enough to return a positive if any soldier in the pool is infected. If K soldiers are infected and K is large, then M = N measurements are needed; but if K = 1 is known a priori, then M = log2(N) measurements suffice. Here a global measurement is taken, followed by a reasoning algorithm. This was proposed by two economists during the Second World War for the screening of soldiers infected with syphilis. The proposal was to pool the blood samples, since few people were likely to be infected, which saves tests on average. Though the test was not put into practice, the discussion on the subject still continues!! A similar principle is applied in compressive sensing.
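A minimal sketch of the K = 1 case, entirely my own illustration: pool i mixes the samples of every soldier whose index has bit i set, so the pattern of positive pools spells out the infected index in binary, using only about log2(N) tests.

    # Hypothetical sketch: group testing with exactly one infected soldier.
    # Pooled test i mixes the samples of all soldiers whose index has bit i set,
    # so ceil(log2(N)) tests recover the infected index.
    import math

    N = 1000
    infected = 613                      # unknown to the tester

    n_tests = math.ceil(math.log2(N))
    results = []
    for bit in range(n_tests):
        pool = [s for s in range(N) if (s >> bit) & 1]
        results.append(infected in pool)    # positive iff the infected soldier is in the pool

    recovered = sum(1 << bit for bit, positive in enumerate(results) if positive)
    print(n_tests, "tests instead of", N, "- recovered index:", recovered)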

The signal is projected onto random signals of equal length, and each projection constitutes one measurement. Viewed through the Fourier series, each random measurement signal is a linear combination of several frequency components, so a projection is equivalent to extracting information about several frequencies of the original signal at once. With enough measurements the signal can be reconstructed using an L1-norm optimization algorithm. The accuracy obtained is remarkable, but it rests on a strict condition: the signal must be sparse in terms of information content, i.e. the number of coefficients needed to represent the signal must be small. Luckily most signals in nature satisfy this requirement, which is precisely why they are compressible.
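
Here is a minimal sketch of that recovery on a toy problem of my own, using SciPy's linear-programming solver (rather than a dedicated package such as L1-Magic) to solve the basis-pursuit problem min ||x||_1 subject to Ax = b:

    # Hypothetical sketch: basis pursuit  min ||x||_1  s.t.  A x = b,
    # written as a linear program over [x, t] with the constraints -t <= x <= t.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    n, m, k = 100, 40, 4                           # signal length, measurements, sparsity

    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    b = A @ x_true                                 # m << n compressive measurements

    c = np.concatenate([np.zeros(n), np.ones(n)])  # minimize sum(t), i.e. the L1 norm of x
    A_ub = np.block([[np.eye(n), -np.eye(n)],      #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])    # -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])        # enforce A x = b
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)

    x_hat = res.x[:n]
    print("max recovery error:", np.max(np.abs(x_hat - x_true)))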

Thursday, December 29, 2011

Compressed Sensing : An Overview #1

We shall try to build intuition for the compressed sensing problem through an analogous example.

There are N buckets of gold coins. Every coin weighs 2 grams, except for the coins in one defective bucket, which weigh 1.8 grams each. The problem is to identify that particular bucket.

The Nyquist-style approach would be to weigh a coin from each bucket separately, i.e. point-wise sampling, which needs N measurements. Alternatively, we can number the buckets and take from each a number of coins equal to its index: 1 coin from bucket 1, 2 coins from bucket 2, and so on up to n coins from bucket n.

With N = 10 buckets we thus collect 55 coins in total. If all coins weighed 2 grams the total would be 110 grams, so if x is the actual weight, the defective bucket is number (110 - x)/(2 - 1.8). Thus, with just one measurement the bucket number is retrieved. There is a critical assumption behind this solution: there is only one defective bucket. This is in fact the 'sparsity prior'. Hence a single linear measurement serves the purpose: one piece of information to retrieve, so one linear combination. More information would call for more, different linear combinations. This in fact is what is called group testing.
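A minimal sketch of the weighing trick with the same numbers as above (the code itself is only an illustration):

    # Hypothetical sketch: find the defective bucket with a single weighted measurement.
    n_buckets = 10
    defective = 7                         # unknown to the weigher
    good, bad = 2.0, 1.8                  # coin weights in grams

    # Take i coins from bucket i (i = 1..10) and weigh them all together.
    x = sum(i * (bad if i == defective else good) for i in range(1, n_buckets + 1))

    recovered = (110 - x) / (good - bad)  # 110 g is the total if every coin were good
    print("measured", x, "g -> defective bucket:", round(recovered))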

Monday, December 26, 2011

Nuit Blanche: Bilateral Random Projections

The blog linked below is very helpful for keeping up to date with sparse processing in signal processing applications.
Nuit Blanche: Bilateral Random Projections: Reminded of it thanks to Sergey, here is the simple but powerful: Bilateral Random Projections by Tianyi Zhou, Dacheng Tao. The a...

Sparse Processing: Time to Unlearn Shannon-Nyquist Theorem?

Signal processing, an ever-dynamic domain, has kept changing its course day after day since its evolution. With an increasing number of functions being pushed into sophisticated software algorithms, there is very little room left for circuit-level processing. This calls for a high degree of visualization and mathematical thought from budding engineers.

The technique one inevitably turns to when sampling natural signals into the discrete domain has been the famous Shannon-Nyquist criterion. But it demands the idealised condition of a band-limited signal, which never truly occurs in nature: expansion in the frequency domain calls for a strong compression in the time domain and vice versa, and this is the source of the errors arising from approximating real signals as band-limited. Yet, although we sample and transform the entire spectrum of a signal, the required information is concentrated in certain hot-spots of the transformed signal. This is the feature exploited in conventional compression techniques.
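
As a small illustration of those hot-spots, the following sketch, using a synthetic smooth signal of my own and SciPy's DCT, measures how much of the signal energy lands in just a few transform coefficients:

    # Hypothetical sketch: energy compaction of a smooth signal under the DCT.
    import numpy as np
    from scipy.fft import dct

    t = np.linspace(0, 1, 1024, endpoint=False)
    signal = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)   # smooth test signal

    coeffs = dct(signal, norm='ortho')          # orthonormal DCT preserves total energy
    sorted_energy = np.sort(coeffs**2)[::-1]
    top_16 = sorted_energy[:16].sum() / sorted_energy.sum()
    print(f"fraction of energy in the 16 largest DCT coefficients: {top_16:.4f}")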

  "Compressive sampling" has emerged. By using nonlinear recovery algorithms, super-resolved signals and images can be reconstructed from what appears to be highly incomplete data. Compressive sampling shows us how data compression can be implicitly incorporated into the data acquisition process, a gives us a new vantage point for a diverse set of applications including accelerated tomographic imaging, analog-to-digital conversion, and digital photography. 

The recently discussed sparse technology views every signal as sparsely distributed and aims at acquiring just those sparse elements of the signal, thereby saving energy, memory and computational effort. Interestingly, to become well versed in sparse processing, the main mathematical tool required is solving linear systems of equations. With a mathematical mind and a proper understanding of L1-Magic, the usual compressive sensing problems can be computed easily. All it requires is a team of efficient mathematicians and engineers with a proper eye for visualizing nature.

The optimization techniques used in this method have proved more efficient and faster than conventional methods. The mathematical proof for compressive sensing was provided by, among others, the Fields Medalist Terence Tao.

This arena calls for a wide scope of research, as various techniques like inpainting and denoising, which are otherwise complex, can be tackled with just the usual differential equation of heat.


We shall discuss this in much more detail in the upcoming days.

Links that might help you.