Tuesday, November 27, 2018

periodic trends - Comparing radii in lithium, beryllium, magnesium, aluminium and sodium ions


Apparently, of the last four, $\ce{Mg^2+}$ is closest in radius to $\ce{Li+}$. Is this true, and if so, why would an ion with a whole extra shell ($\ce{Mg^2+}$) be closer in radius to $\ce{Li+}$ than its isoelectronic neighbor ($\ce{Be^2+}$), which has just one extra proton?
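As a rough back-of-the-envelope check (my own illustration using Slater's rules, not part of the original question): $\ce{Li+}$ and $\ce{Be^2+}$ are both $1s^2$, so the only difference is how strongly the nucleus pulls on those two electrons,

$$Z_\mathrm{eff}(\ce{Li+}) \approx 3 - 0.30 = 2.70, \qquad Z_\mathrm{eff}(\ce{Be^2+}) \approx 4 - 0.30 = 3.70,$$

so $\ce{Be^2+}$ binds the same electron pair much more tightly and ends up considerably smaller than $\ce{Li+}$. $\ce{Mg^2+}$, despite having a full extra $n = 2$ shell, has that shell contracted by a much larger nuclear charge, which is why it can land close to $\ce{Li+}$ in size.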




organic chemistry - How to change CO₂ to a less polluting gas?


I want to use a filter of some sort to convert the $\ce{CO2}$ coming out of the tailpipe of a car into some other compound that is considerably less harmful to the environment.


What are some ways that this is possible?



$\ce{CO2}$ to carbon and oxygen?
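For a sense of scale (a standard-state thermochemistry figure, not part of the original question): splitting $\ce{CO2}$ back into its elements simply reverses its formation,

$$\ce{CO2(g) -> C(s) + O2(g)}, \qquad \Delta H^\circ \approx +394~\mathrm{kJ\,mol^{-1}},$$

so any tailpipe "filter" doing this would have to be supplied with at least as much energy as was released when that $\ce{CO2}$ was formed in the first place.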




Sunday, November 25, 2018

organic chemistry - How would substitution reaction results differ between reacting halide salts with a tertiary bromoalkane vs reacting them with a primary bromoalkane?


Say you conducted two separate reactions: in one, you combined $\mathrm{50~mmol}$ of saturated $\ce{KCl}$ solution with $\mathrm{10~mmol}$ of 1-bromooctane (assume heating), and in the other you combined $\mathrm{10~mmol}$ of saturated $\ce{KI}$ solution with $\mathrm{10~mmol}$ of 1-bromooctane. In both cases $\mathrm{0.5~mmol}$ of hexadecyltributylphosphonium bromide was used as a catalyst.



So, due to relative charge densities and the fact that the electrophilic carbon is primary, you would get a low yield of 1-chlorooctane relative to starting material in reaction 1 and a high yield of 1-iodooctane relative to starting material in reaction 2, both products formed via an SN2 reaction.


Now suppose you repeated the same two reactions, only this time reacting the $\ce{KCl}$ and $\ce{KI}$ with 2-bromo-2-methyloctane. How exactly would the results differ from those of the previous two reactions, or would they be the same?


I know that the reaction would have no "choice" but to proceed via the SN1 (or perhaps E1) route, as the electrophilic carbon is tertiary and thus very sterically hindered. Also, I know that $\ce{I-}$ ions are good nucleophiles, but that doesn't matter in this case, does it?


Perhaps this reaction would go slower than the previous ones, since it requires the $\ce{Br-}$ to dissociate first, and therefore there would be lower yields of both haloalkane products. But then again, wouldn't it go faster given that it is in a polar protic solvent and that the carbocation formed would be tertiary?
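For reference (standard textbook rate laws, not from the original question), the kinetic distinction this reasoning relies on is

$$\text{S}_\text{N}2\text{ rate} = k[\mathrm{RBr}][\mathrm{Nu^-}], \qquad\qquad \text{S}_\text{N}1\text{ rate} = k[\mathrm{RBr}],$$

so for the tertiary substrate the nucleophile does not appear in the rate law at all: the identity of the halide ($\ce{Cl-}$ vs $\ce{I-}$) mainly determines which product is captured after the rate-determining ionisation, not how fast the substrate reacts.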




filters - Is up-sampling prior to cross-correlation useless?



Consider a simple case where two signals from two different sensors are cross-correlated, and the time-delay-of-arrival is computed from the abscissa of the peak of their cross-correlation function.


Now let us further assume that, due to the constraints on the physical dimensions of both antennas and on the maximum possible sampling rate, the maximum attainable delay is $D$, corresponding to 10 samples.


The problem:


Because of those constraints, the computed delay can only take integer values between 0 and 10 samples, that is, $0 \le D \le 10$. This is problematic because what I really want is fractional-delay discrimination of the delay between the two signals impinging on my antennas, and changing the dimensions or the sampling rate is not an option.
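As an illustration of the problem (my own sketch, not from the question; the signal, the sample rate and the 7.75-sample "true" delay are made-up values), the peak of a sampled cross-correlation can only land on an integer lag:

```python
import numpy as np

fs = 1000.0                       # assumed sample rate, Hz
t = np.arange(0, 1, 1 / fs)       # 1 s of data
true_delay = 7.75                 # fractional delay, in samples

# A band-limited test signal and a fractionally delayed copy
# (the delay is applied in the frequency domain so it can be non-integer).
rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(t.size), np.ones(8) / 8, mode="same")
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
y = np.fft.irfft(X * np.exp(-2j * np.pi * freqs * true_delay / fs), n=x.size)

# Cross-correlate and take the argmax: the lag estimate is an integer.
xcorr = np.correlate(y, x, mode="full")
lags = np.arange(-x.size + 1, x.size)
print("estimated delay:", lags[np.argmax(xcorr)])   # lands on 8, not 7.75
```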


Some thoughts:




  • Naturally, the first thing I think of for this case is upsampling the signals before performing a cross-correlation. However, I think this is 'cheating' somehow, because I am not really adding any new information to the system.





  • I do not understand how upsampling is not 'cheating' in a sense. Yes, we are reconstructing our signal based on its currently observed frequency information, but how does this give one knowledge of where a signal truly started between, say, $D=7$ and $D=8$? Where was this information contained in the original signal that determined that the true fractional-delay start of the signal was actually at $D=7.751$?




The question(s):




  • Is this truly 'cheating'?



    • If not, then where is this new 'information' coming from?

    • If yes, then what other options are available for estimating fractional-delay times?





  • I am aware of upsampling the result of the cross-correlation, in an attempt to garner sub-sample answers to the delay, but is this not also a form of 'cheating'? Why is it different from upsampling prior to the cross-correlation?




If indeed it is the case that the upsampling is not 'cheating', then why would we ever need to increase our sampling rate? (Isn't having a higher sampling rate always better in a sense than interpolating a low sampled signal?)


It would seem then that we could just sample at a very low rate and interpolate as much as we want. Would this then not make increasing the sample rate 'useless' in light of simply interpolating a signal to our heart's desire? I realize that interpolation takes computational time and simply starting with a higher sample rate would not, but is that then the only reason?


Thanks.



Answer




It's not cheating, and it's also not adding any new information. What you are doing is the same thing that any upsampling LPF is doing: adding zeros and then reconstructing the waveform with the already known frequency information. Thus, there is no new information, but there is still finer time resolution.
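As a sketch of that idea (an illustration only; the function name, the factor $L$, and the use of SciPy's polyphase resampler as the "upsampling LPF" are all my assumptions):

```python
import numpy as np
from scipy import signal

def delay_estimate_upsampled(x, y, L=8):
    """Estimate the delay of y relative to x, in (fractional) input samples,
    by upsampling both by L (zero insertion + low-pass reconstruction)
    and then locating the cross-correlation peak."""
    x_up = signal.resample_poly(x, up=L, down=1)
    y_up = signal.resample_poly(y, up=L, down=1)
    xcorr = np.correlate(y_up, x_up, mode="full")
    lags = np.arange(-x_up.size + 1, y_up.size)   # lags in upsampled samples
    return lags[np.argmax(xcorr)] / L             # back in original sample units
```

With an upsampling factor of $L$, the lag estimate is quantised to $1/L$ of the original sample period instead of a whole sample, even though no new information has been added.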


Upsampling the result is similar: no new information, but finer time resolution. You can do something very similar through quadratic interpolation.
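That quadratic refinement can be sketched directly on the raw correlation result (again just an illustration; `xcorr` and `lags` are assumed to be the cross-correlation array and its matching integer lags):

```python
import numpy as np

def parabolic_peak(xcorr, lags):
    """Refine the correlation peak to a fractional lag by fitting a parabola
    through the peak sample and its two nearest neighbours.
    (Boundary checks for a peak at the first/last index are omitted.)"""
    k = int(np.argmax(xcorr))
    ym1, y0, yp1 = xcorr[k - 1], xcorr[k], xcorr[k + 1]
    # Vertex offset of the parabola through (-1, ym1), (0, y0), (1, yp1).
    delta = 0.5 * (ym1 - yp1) / (ym1 - 2.0 * y0 + yp1)
    peak_value = y0 - 0.25 * (ym1 - yp1) * delta   # interpolated peak height
    return lags[k] + delta, peak_value
```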


All of these methods (upsampling and polynomial interpolation) get their information on where the fractional peak is from both the peak itself and its neighbors. A quick pictorial example:

(figure: a "balanced" cross-correlation peak)


The blue line in the picture above is my simulated cross-correlation data (though it could be any result, not just a cross-correlation). It is what I call a "balanced" peak because the neighbors are symmetric. As you might expect, the resulting quadratic interpolation (red line) indicates that the true peak is at zero.


The image below, on the other hand, shows an unbalanced peak. Please note that nothing has changed in the result except for the values of the two nearest neighbors. This causes the interpolator, though, to shift its estimate of the fractional peak.

(figure: an unbalanced cross-correlation peak)


A nifty side benefit of these methods (polynomial interpolation and upsampling) is that they also give you an estimate of the true peak value, though we are usually more interested in the location.



If indeed it is the case that the upsampling is not 'cheating', then why would we ever need to increase our sampling rate?



To satisfy the Nyquist criterion.




Isn't having a higher sampling rate always better in a sense than interpolating a low sampled signal?



No. From a theoretical standpoint, as long as the Nyquist criterion is satisfied it doesn't matter what the sample rate is. From a practical standpoint, you generally go with as low a sample rate as you can get away with, to reduce the storage requirements and computational load, which in turn reduces the resources needed and the power consumption.


adaptive algorithms - Kalman filter in practice



I have read the description of the Kalman filter, but I am not clear on how it comes together in practice. It appears to be primarily targeted at mechanical or electrical systems, since it wants linear state transitions, and for the same reason it does not seem useful for anomaly detection or for locating state transitions. Is that correct? In practice, how does one typically find the components that are expected to be known in advance in order to use a Kalman filter? I have listed the components below; please correct me if my understanding of what needs to be known in advance is incorrect.


I believe these do not need to be known "in advance":



  • Process noise $\mathbf w$

  • Observation noise $\mathbf v$

  • Actual state $\mathbf x$ (this is what the Kalman filter tries to estimate)


I believe these need to be known "in advance" to use a Kalman filter:



  • The linear state transition model which we apply to $\mathbf x$ (we need to know this in advance, so our states must be governed by known laws, i.e. the Kalman filter is useful for correcting measurements when the transition from one state to another is well understood and deterministic up to a bit of noise - it is not an anomaly finder or a tool to find random state changes)


  • Control vector $\mathbf u$

  • Control input model which is applied to control vector $\mathbf u$ (we need to know this in advance, so to use a Kalman filter we also need to know in advance how our control values affect the model, up to at most some Gaussian noise, and the effect needs to be linear)

  • Covariance $\mathbf Q$ of the process noise (which appears to be time dependent in the Wikipedia article, i.e. it depends on the time $k$). It appears we need to know this in advance and over time; I assume in practice it is taken as constant?

  • A (linear) observation model $\mathbf H$

  • Covariance $\mathbf R$ of the observation noise (which also appears to be time dependent in the Wikipedia article); similar issues to $\mathbf Q$


P.S. And yes, I know many of these depend on time; I just dropped all the subscript clutter. Feel free to imagine a small letter $k$ to the right of and below each variable name if you would like to.



Answer



For some context, let's go back to the Kalman Filter equations:


$$\mathbf{x}(k+1) = \mathbf{F}(k)\,\mathbf{x}(k) + \mathbf{G}(k)\,\mathbf{u}(k) + \mathbf{w}(k)$$
$$\mathbf{z}(k) = \mathbf{H}(k)\,\mathbf{x}(k) + \mathbf{v}(k)$$



In short, for a plain vanilla KF:


$\mathbf{F}(k)$ must be fully defined. This comes straight from the differential equations of the system. If not, you have a dual estimation problem (i.e. estimate both the state and the system model). If you don't have differential equations of the system, then a KF isn't for you!


$\mathbf{x}(k)$ is, by definition, unknowable. After all, if you knew it, it wouldn't be an estimation problem!


The control vector $\mathbf{u}(k)$ must be fully defined. Without additional system modelling, the only uncertainty on the control vector may be AWGN, which may be incorporated into the process noise. Known matrix $\mathbf{G}(k)$ relates the control input to the states - for example, how aileron movement affects the roll of an aircraft. This is mathematically modelled as part of the KF development.


The system process noise $\mathbf{w}(k)$ is also, by definition, unknowable (since it is random noise!). However, the statistics of the noise must be known which, for a plain vanilla KF, must be zero mean AWGN with known covariance $\mathbf{Q}(k)$. Sometimes, the covariance of the noise may change between samples, but in many cases it is fixed and therefore $\mathbf{Q}$ is a constant. In some instances, this will be known, but in many instances, this will be "tuned" during system development.


Observations are a similar story. The matrix that relates your measurements to the states $\mathbf{H}(k)$ must be fully defined. Your measurements $\mathbf{z}(k)$ are also known because that's the reading from your sensors!


The sensor measurements, however, are corrupted by AWGN $\mathbf{v}(k)$, which, being random noise, is by definition unknown. The statistics of that noise must be known: zero mean with covariance $\mathbf{R}(k)$. Once again, the covariance may change with time, but for many applications it is a fixed value. Often your sensors will have known noise characteristics from the datasheet; otherwise, it's not too hard to determine the mean and variance of the sensors you intend to use. Yes, this can also be "tuned" empirically.
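To tie the notation together, here is a minimal NumPy sketch (an illustration only, not the asker's system) of one predict/update cycle of a plain vanilla KF; $\mathbf{F}$, $\mathbf{G}$, $\mathbf{H}$, $\mathbf{Q}$, $\mathbf{R}$, the control $\mathbf{u}$ and the measurement $\mathbf{z}$ are all assumed known, exactly as discussed above:

```python
import numpy as np

def kf_predict(x, P, F, G, u, Q):
    """Propagate the state estimate x and its covariance P one step ahead."""
    x_pred = F @ x + G @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Correct the prediction with the measurement z."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```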


There are a huge number of "tricks" that can be done to work around the restrictions in a plain vanilla KF, but these are far beyond the scope of this question.




Afterthought:



Whilst googling for "Kalman Filter" results in a million hits, there are a couple of things that I think are worth looking at. The Wikipedia page is too cluttered to learn from effectively :(


On AVR Freaks, there is an "equation free" intro to the Kalman Filter that I wrote some time ago to try to introduce it and show where it is used for real.


If you're not afraid of maths, there are several books worth reading at the senior undergraduate/early postgraduate level. Try either Brown and Hwang, which includes all the theory and plenty of example systems, or Gelb, which comes highly recommended (though I have not read it) and has the distinct advantage of being cheap!


ozone - Ozonolysis Reaction


Why does the starting compound not need an -OH group? I thought ozonolysis only produced carbonyls?


(figure: the reaction scheme referred to in the question)




Saturday, November 24, 2018

homework - Help balancing reaction between iron(II) chloride and potassium phosphate


Very rusty with chemistry and looking to review some basics. Came across the following:



Iron(II) chloride and potassium phosphate react. Write and balance the molecular, ionic, and net ionic equations.



What I have so far (guessing, basically):


$$\ce{FeCl2 + K(H2PO4) -> KCl + Fe(H2PO4)2}$$ [still need to balance but correct right hand side?]



I can't remember the exact rules for forming the right-hand side; I just wrote this to match ion charges. What prevents me from writing the right-hand side as something else entirely? (I'm not sure what, but I'm asking: couldn't I figure something out if I break everything apart and re-arrange it into another option?)
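Purely as an atom-bookkeeping illustration on the equation as written above (leaving aside whether dihydrogen phosphate is really the intended phosphate source), the coefficients work out as

$$\ce{FeCl2 + 2 K(H2PO4) -> Fe(H2PO4)2 + 2 KCl}$$

If "potassium phosphate" is instead read as $\ce{K3PO4}$, the analogous exchange would be

$$\ce{3 FeCl2 + 2 K3PO4 -> Fe3(PO4)2 + 6 KCl}$$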


EDIT: Additional Question Below


OK, so assume the above reaction was balanced (it's not) and valid. Then my question is: does the reaction go both ways ($\ce{<-->}$)? If so, when/why/how? I'm trying to think back many years, and it has something to do with "reactivity" and the row of the elements on the periodic table, correct? I'm going to look into it more myself, but if someone has a quick "30 second" answer I'm sure it'd be of help to me!


[Assuming this is OK and valid in the forward direction. Again, needs to be balanced.]


$$\ce{FeCl2 + K(H2PO4) -> KCl + Fe(H2PO4)2}$$


[Is this ever ok/possible?]


$$\ce{KCl + Fe(H2PO4)2 -> FeCl2 + K(H2PO4)}$$



