Then we obtain the corresponding Jacket-Haar matrix together with its inverse matrix. In summary, based on Theorems 3 and 4, the Jacket-Haar transform matrix of any size $N$ can be derived recursively, alternating between the two theorems. When $N$ is odd, Theorem 3 derives the $N$-point Jacket-Haar transform matrix from the $(N+1)/2$-point one; when $N$ is even, Theorem 4 derives the $N$-point Jacket-Haar transform matrix from the $N/2$-point one.
For example, when $N = 9$, the size is odd and Theorem 3 is chosen, which means the 9-point Jacket-Haar matrix can be derived from the 5-point Jacket-Haar matrix. Next, the 5-point Jacket-Haar matrix can be obtained from the 3-point Jacket-Haar matrix, again by Theorem 3.
Finally, the 3-point Jacket-Haar matrix can be constructed from the 2-point Jacket-Haar matrix by Theorem 3. Therefore, the 9-point Jacket-Haar matrix is ultimately obtained from 2-point Jacket-Haar matrices.
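As a minimal sketch (ours, not the paper's), this size-reduction chain implied by the odd/even rule above can be traced in a few lines of Python; the function name and the printed output format are illustrative assumptions.

```python
def jacket_haar_chain(n):
    """List the (size, theorem) steps that reduce an n-point
    Jacket-Haar construction to the 2-point base case, assuming
    Theorem 3 handles odd sizes (n -> (n + 1) // 2) and
    Theorem 4 handles even sizes (n -> n // 2)."""
    steps = []
    while n > 2:
        if n % 2 == 1:          # odd size: Theorem 3
            steps.append((n, "Theorem 3"))
            n = (n + 1) // 2
        else:                   # even size: Theorem 4
            steps.append((n, "Theorem 4"))
            n = n // 2
    steps.append((2, "base case"))
    return steps

# Example: 9 -> 5 -> 3 -> 2, using Theorem 3 at every step,
# matching the chain described above.
print(jacket_haar_chain(9))
```

Running the sketch for size 9 reproduces the chain 9 → 5 → 3 → 2 described in the text.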
Table 1 lists the Jacket-Haar matrices generated for several sizes. In fact, the Jacket-Haar transform matrix of any size ultimately originates from the 2-point Jacket-Haar transform matrices.
Based on the 2-point Jacket-Haar transform matrices, Jacket-Haar transform matrices of any size can be constructed conveniently, alternately and recursively, by applying Theorems 3 and 4. Consequently, certain useful sparse Jacket-Haar transform matrices can be derived theoretically.
Corollary 5. There exists an arbitrary-length sparse Jacket-Haar transform matrix with at most two nonzero elements in each row. In addition, there exists an arbitrary-length Jacket-Haar transform matrix such that the transpose of its inverse matrix is also a Jacket-Haar transform matrix, subject to the constraints given in (25).
The corollary is proved by induction. First, it is clear that all 2-point Jacket-Haar transform matrices satisfy the stated requirement. For the inductive step, there are two cases, depending on which theorem is used to generate the next, higher-order Jacket-Haar transform matrix. In the first case, suppose a sparse $N$-point Jacket-Haar transform matrix is given. With the 2-point Jacket-Haar transform matrices of the form described in (1) or (2), the larger Jacket-Haar transform matrix derived from (11) is again sparse, since each of its rows contains at most two nonzero elements.
In the other case, if a sparse Jacket-Haar transform matrix exists together with the given 2-point matrices, the larger Jacket-Haar transform matrix obtained from (18) is also sparse. This completes the proof of the first part of the corollary. Such sparse Jacket-Haar transform matrices are attractive in practical applications where acceptable performance can be achieved at low complexity. The second part of the corollary is proved by induction in a similar way.
For the base case, it is easy to check that the transpose of the 2-point inverse matrix of the form (5) is also a Jacket-Haar transform matrix and that (25) is satisfied. For the inductive step, there are again two cases, depending on which theorem is used to construct the next Jacket-Haar matrix. In the first case, suppose an $N$-point Jacket-Haar transform matrix is given whose inverse has a transpose that is also a Jacket-Haar transform matrix satisfying the constraint in (25). Then, with the 2-point Jacket-Haar transform matrices, the matrix derived from (12) is a larger Jacket-Haar transform matrix whose inverse also has a transpose that is a Jacket-Haar transform matrix.
Similarly, in the second case, suppose an $N$-point Jacket-Haar transform matrix is given whose inverse has a transpose that is also a Jacket-Haar transform matrix. With the 2-point Jacket-Haar transforms of the form described in (3), the matrix derived from (18) is a larger Jacket-Haar transform matrix whose inverse also has a transpose that is a Jacket-Haar transform matrix. Therefore, the whole corollary holds and the proof is complete. It is interesting to note that the property of the Jacket-Haar transform established in this corollary is similar to that of the Jacket transform.
It is well known that if $J$ is an $N$-point Jacket transform matrix, then $(J^{-1})^{T}$ is also an $N$-point Jacket transform matrix under analogous constraints. Just like the conventional Haar transform, the Jacket-Haar transform of large size can be performed with fast algorithms. To give a clearer demonstration and a simple comparison, fast algorithms for both the conventional Haar transform and the Jacket-Haar transform are presented side by side.
Figure 1 shows the implementation structures of the conventional Haar transforms of sizes 2, 4, and 8, respectively. Figure 2 presents the implementation structures of the 2-point, 3-point, and 4-point Jacket-Haar transforms, from which it can be seen that the 3-point and 4-point Jacket-Haar transforms decompose into 2 and 3 butterflies, respectively. For larger sizes, the $N$-point Jacket-Haar transform can be decomposed into a combination of 2-point Jacket-Haar transforms by the method illustrated in Figure 3.
From Figures 1 to 3, it is clear that the fast-algorithm structures of the original Haar transform and the proposed Jacket-Haar transform are similar. For example, the complete implementation structure of a 9-point Jacket-Haar transform matrix is illustrated in Figure 4. It consists of four layers, containing 4, 2, 1, and 1 butterflies in the successive layers from left to right.
The transform requires only arithmetic additions, with no multiplications. Compared with the conventional Haar transform, if both kinds of transform matrices have the same size, which is a power of 2, they have the same number of implementation layers, butterflies, and arithmetic additions, again without any multiplications, but with different numbers of bit-shift operations. When the matrix size is not a power of 2, no corresponding conventional Haar transform matrix exists.
For the proposed Jacket-Haar transform matrices of such sizes, the implementation still consists of layers of butterflies and requires only arithmetic addition operations, without any multiplication operations.
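For concreteness, the following is a minimal sketch (our illustration, not code from the paper) of the conventional fast Haar transform for power-of-two lengths, showing the layered 2-point butterfly structure that Figures 1 and 3 generalize; the unnormalized scaling is an assumption, since normalization conventions vary between references.

```python
import numpy as np

def fast_haar(x):
    """Unnormalized fast Haar transform of a power-of-two-length signal,
    built layer by layer from 2-point butterflies: pairwise sums go to
    the front half, pairwise differences to the back half."""
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        sums = out[:n:2] + out[1:n:2]    # 2-point butterfly: sum
        diffs = out[:n:2] - out[1:n:2]   # 2-point butterfly: difference
        out[:half] = sums
        out[half:n] = diffs
        n = half
    return out

# 8-point example: 3 layers with 4, 2, and 1 butterflies,
# i.e. 7 butterflies and 14 additions/subtractions in total.
print(fast_haar([1, 2, 3, 4, 5, 6, 7, 8]))
```

The operation counts in this sketch (N - 1 butterflies and 2(N - 1) additions for an N-point transform) are consistent with the comparison discussed above.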
The details are shown in Table 2. Definition 6. For a given square matrix $A = [a_{ij}]$ of order $n$, if its inverse matrix can be obtained simply from its element-wise inverse, that is, $[A^{-1}]_{ij} = \frac{1}{c}\,\frac{1}{a_{ji}}$ for $1 \le i, j \le n$, where $c$ is a nonzero constant, one calls $A$ a Jacket matrix; equivalently, $A^{-1} = \frac{1}{c}\big[\tfrac{1}{a_{ij}}\big]^{T}$, where $(\cdot)^{T}$ denotes the matrix transpose. Definition 7. Suppose $J$ is a Jacket matrix of order $m$ and $H$ is a Jacket-Haar matrix of order $n$. If $G = J \otimes H$, one calls $G$ a generalized Jacket-Haar matrix. According to these definitions, it is clear that both the Jacket matrix and the Jacket-Haar matrix belong to the generalized Jacket-Haar matrix family.
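As a quick numerical illustration of Definition 6 (our example, not the paper's), a 4×4 Hadamard matrix is a well-known Jacket matrix with constant c = 4: its inverse equals the transpose of its element-wise inverse divided by c.

```python
import numpy as np

# 4x4 Hadamard matrix: a standard example of a Jacket matrix.
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)

# Jacket condition: inverse(A) == (1/c) * elementwise_inverse(A).T
c = 4
elementwise_inverse_T = (1.0 / H4).T / c
print(np.allclose(np.linalg.inv(H4), elementwise_inverse_T))  # True
```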
Theorem 8. Since both the Jacket transform and the Jacket-Haar transform admit fast algorithms, the generalized Jacket-Haar transform, which is the Kronecker product of a Jacket transform and a Jacket-Haar transform, can also be composed or decomposed with fast algorithms.
Suppose the Jacket matrix $J$ can be decomposed by its own fast algorithm into a product of sparse factors involving the identity matrix; the detailed procedure can be seen in [6].
Then, by the mixed-product property of the Kronecker product, the generalized Jacket-Haar matrix $G = J \otimes H$ can be decomposed as $G = (J \otimes I_{n})(I_{m} \otimes H)$. Since $H$ can be decomposed into 2-point Jacket-Haar transforms, $G$ can likewise be decomposed or constructed with fast algorithms. For example, we can combine a 3-point Jacket-Haar transform and a 4-point Jacket transform to construct a 12-point generalized Jacket-Haar transform; the corresponding fast algorithm is shown in Figure 5.
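The following sketch illustrates this Kronecker-product decomposition numerically. Since the paper's specific 3-point Jacket-Haar and 4-point Jacket matrices are not reproduced in this excerpt, a 4×4 Hadamard matrix and an arbitrary invertible 3×3 matrix are used purely as stand-ins.

```python
import numpy as np

# Stand-in matrices (NOT the paper's): a 4x4 Hadamard matrix for the
# Jacket part and an arbitrary invertible 3x3 matrix in place of the
# 3-point Jacket-Haar matrix, just to exhibit the identity.
J = np.kron([[1, 1], [1, -1]], [[1, 1], [1, -1]])            # 4x4
H = np.array([[1., 1., 1.], [1., -1., 0.], [0., 1., -1.]])   # 3x3 stand-in

G = np.kron(J, H)                                  # 12x12 composite transform
fast_form = np.kron(J, np.eye(3)) @ np.kron(np.eye(4), H)

# Each factor on the right can be applied with its own fast algorithm.
print(np.allclose(G, fast_form))  # True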
The proposed arbitrary-length Jacket-Haar transform can, in principle, be applied in areas where the conventional Haar transform has been used, such as signal analysis, image processing, OFDM, and filter design.
Two experiments are presented, for an even-length and an odd-length transform, respectively. The length of the ECG signal in Figure 6 is even but not a power of 2. Such a case cannot be analyzed conveniently by directly using the original Haar transform, but it can be analyzed efficiently by the proposed arbitrary-length Jacket-Haar transform.
In Figure 7, we show the normalized mean square error (NMSE), $\mathrm{NMSE} = \|x - \hat{x}\|^{2}/\|x\|^{2}$, of the reconstructed signal when only part of the coefficients of the generalized Jacket-Haar transform are used, where $x$ is the original signal, $X = Hx$ is its forward transform, and $\hat{x} = H^{-1}\tilde{X}$ is the reconstruction; $H$ and $H^{-1}$ are the forward and inverse Jacket-Haar transform matrices, and the vector $\tilde{X}$ preserves part of the coefficients of $X$ while the others are set to zero. The results in Figure 7 show that, with the Jacket-Haar transform, a smaller approximation error is achieved when only a few terms are used to expand the ECG signal.
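To make the evaluation procedure concrete, here is a small sketch of how one point of such an NMSE curve can be computed. The ECG data and the generalized Jacket-Haar matrices are not available in this excerpt, so an orthonormal Haar matrix of power-of-two size and a synthetic signal are used as stand-ins, and keeping the K largest-magnitude coefficients is one possible selection rule (the paper's exact rule is not specified here).

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix for n a power of two (stand-in for the
    generalized Jacket-Haar matrix used in the paper's experiment)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1, 1])                 # averaging rows
    bottom = np.kron(np.eye(n // 2), [1, -1])  # detail rows
    return np.vstack([top, bottom]) / np.sqrt(2)

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(64))       # synthetic test signal
H = haar_matrix(64)

X = H @ x                                    # forward transform
K = 16
keep = np.argsort(np.abs(X))[::-1][:K]       # indices of K largest coefficients
X_trunc = np.zeros_like(X)
X_trunc[keep] = X[keep]
x_hat = H.T @ X_trunc                        # inverse transform (H is orthonormal)

nmse = np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
print(nmse)
```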
As shown in Figures 8 and 9, another simulation experiment is carried out, in which the length of the ECG signal in Figure 8 is odd and not a power of 2. In future work, the proposed Jacket-Haar transform will be applied to more practical scenarios with extensive comparisons. We have investigated the fast construction of the Jacket-Haar transform of arbitrary length, which overcomes the traditional constraint that the number of points must be a power of 2.
With the proposed fast generation algorithms, the arbitrary-length Jacket-Haar transform can be derived in a successive fashion. Moreover, we have shown possible implementations of the fast algorithms and applications in signal processing.
On the basis of the structures of the traditional Jacket transform and of the Jacket-Haar transform of any size, fast algorithms for the Jacket-Haar transform are derived for arbitrary lengths. Compared with the traditional FFT and its extensions, the proposed Jacket-Haar transform is more efficient for signal reconstruction. More properties of the Jacket-Haar transform and its practical applications may be investigated in future work.
The authors declare that there is no conflict of interest regarding the publication of this paper. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.