2023-09-30

JFrame not moving

I want my JFrame to be movable even though I disabled decorations.

After some googling I added a MouseListener, but it still didn't help.

    public static void main(String[] args) {

        try {
            UIManager.setLookAndFeel(new FlatDarkLaf());
        } catch (Exception errorDesign) {
            logError(errorDesign);
        }

        JFrame frame = new JFrame();
        frame.setBounds(1600, 400, 500, 800);
        frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);

        frame.setUndecorated(true);
        frame.setVisible(true);

        frame.addMouseListener(new MouseAdapter() {
            private Point mouseOffset;

            @Override
            public void mousePressed(MouseEvent e) {
                mouseOffset = e.getPoint();
            }

            @Override
            public void mouseDragged(MouseEvent e) {
                Point newLocation = e.getLocationOnScreen();
                newLocation.translate(-mouseOffset.x, -mouseOffset.y);
                frame.setLocation(newLocation);
            }

        });

    }

Does anyone know what I did wrong?
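For context, the drag math the listener is trying to implement is: new window origin = cursor position on screen minus the offset recorded at mousePressed. A small language-neutral Python sketch, with hypothetical coordinates:

```python
def window_origin(cursor_on_screen, press_offset):
    """New top-left corner of the window while dragging:
    the cursor's screen position minus where inside the
    window the mouse button was originally pressed."""
    cx, cy = cursor_on_screen
    ox, oy = press_offset
    return (cx - ox, cy - oy)

# Pressed 40px right / 10px below the window's corner;
# cursor is now at (1700, 450) on screen:
print(window_origin((1700, 450), (40, 10)))  # (1660, 440)
```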



2023-09-29

nginx failing to load SSL certificate

The Problem

I used mkcert -install and then mkcert my-dev-env.local 127.0.0.1 localhost to create local SSL certificates for a Django project using Docker on Windows, but I get a "This site can’t provide a secure connection" error when I try to access https://localhost. The Docker log shows:

2023-09-26 18:19:47 nginx.1     | 2023/09/27 00:19:47 [error] 38#38: *10 cannot load certificate "data:": PEM_read_bio_X509_AUX() failed (SSL: error:0480006C:PEM routines::no start line:Expecting: TRUSTED CERTIFICATE) while SSL handshaking, client: 172.18.0.1, server: 0.0.0.0:443

What I Tried

Followed the directions to set up a new Django project with Docker using the Cookiecutter-Django template. Did everything down through the "Run the Stack" section, and the local development website looked good on localhost.

Skipped down to the "Developing locally with HTTPS" section and followed those directions. The directions don't specify how to change the files from .pem to .crt or .key, so on the first try I just renamed them. The rest of the template website still works fine, but when I go to https://localhost I get a "This site can’t provide a secure connection" error.

I tried changing:

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

to

-----BEGIN TRUSTED CERTIFICATE-----
...
-----END TRUSTED CERTIFICATE-----

in a text editor and still got the same error messages as above.

I tried using openssl to convert the files from .pem to .crt and .key with the -trustout option, as recommended in this answer, but got the same error message.

$ openssl x509 -in my-dev-env.local.pem -trustout -out my-dev-env.local.crt
$ openssl rsa -in my-dev-env.local-key.pem -out my-dev-env.local.key

I tried validating the keys as recommended in this answer, both before and after the above changes, and there were never any errors doing this.

$ openssl x509 -noout -text -in my-dev-env.crt
$ openssl rsa -noout -text -in my-dev-env.key

I tried a number of other things that probably make no sense, since I don't really know what I'm doing. The only one that might be interesting is when I put TRUSTED on the BEGIN CERTIFICATE line but not on the END CERTIFICATE line. After rebuilding and running the Docker stack, the error read "This site can’t be reached"; without rebuilding the stack, just restarting the nginx container, I got an infinitely repeating error in the Docker log:

2023-09-26 18:50:55 nginx.1     | 2023/09/27 00:50:55 [emerg] 452#452: cannot load certificate "/etc/nginx/certs/my-dev-env.local.crt": PEM_read_bio_X509_AUX() failed (SSL: error:04800066:PEM routines::bad end line)

Here the problem is obviously the bad end line, but at least it shows nginx is trying to read the correct certificate file.
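For reference, PEM_read_bio_X509_AUX is strict about the BEGIN/END armor lines. A rough Python sketch of that check (illustrative only, not OpenSSL's actual parser) reproduces both errors from the logs above; note that editing only the BEGIN line to say TRUSTED is exactly the "bad end line" case:

```python
def check_pem(pem_text: str) -> list[str]:
    """Return a list of problems with a PEM block, mirroring the two
    OpenSSL errors seen in the nginx logs: 'no start line' (missing or
    unrecognized BEGIN line) and 'bad end line' (END label that does
    not match the BEGIN label)."""
    problems = []
    lines = [line.strip() for line in pem_text.strip().splitlines()]
    if not lines or not lines[0].startswith("-----BEGIN "):
        problems.append("no start line")
        return problems
    begin_label = lines[0].removeprefix("-----BEGIN ").removesuffix("-----")
    if lines[-1].startswith("-----END "):
        end_label = lines[-1].removeprefix("-----END ").removesuffix("-----")
    else:
        end_label = None
    if end_label != begin_label:
        problems.append("bad end line")
    return problems
```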



Does PyTorch support stride_tricks as in numpy.lib.stride_tricks.as_strided?

It is possible to do neat things by changing the strides of an array in NumPy, like this:

import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(15).reshape(3,5)

print(a)
# [[ 0  1  2  3  4]
#  [ 5  6  7  8  9]
#  [10 11 12 13 14]]

b = as_strided(a, shape=(3,3,3), strides=(a.strides[-1],)+a.strides)

print(b)
# [[[ 0  1  2]
#   [ 5  6  7]
#   [10 11 12]]

#  [[ 1  2  3]
#   [ 6  7  8]
#   [11 12 13]]

#  [[ 2  3  4]
#   [ 7  8  9]
#   [12 13 14]]]


# Get 3x3 sums of a, for example
print(b.sum(axis=(1,2)))
# [54 63 72]

I searched for a similar method in PyTorch and found as_strided, but it does not support strides that make multiple indices refer to the same element, as the warning in its documentation says:

The constructed view of the storage must only refer to elements within the storage or a runtime error will be thrown, and if the view is “overlapped” (with multiple indices referring to the same element in memory) its behavior is undefined.

In particular, the behavior is undefined for the example above, where elements have multiple indices.

Is there a way to make this work (with documented, specified behavior)? If not, why does PyTorch not support this?
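For comparison, NumPy itself ships a documented, bounds-checked alternative to raw as_strided: sliding_window_view builds the same overlapping windows with specified behavior. (On the PyTorch side, Tensor.unfold covers many of the same overlapping-window cases, though as far as I know it is not a general as_strided replacement.) The 3×3 window sums from the example above:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(15).reshape(3, 5)

# sliding_window_view is the documented, safe counterpart of as_strided:
# it builds the same overlapping windows without manual stride math.
windows = sliding_window_view(a, (3, 3))   # shape (1, 3, 3, 3)
sums = windows.sum(axis=(2, 3)).ravel()    # per-window 3x3 sums
print(sums)  # [54 63 72]
```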



2023-09-28

Multiple PopoverTip Modifiers in SwiftUI: Persistent Display Glitch

I've encountered an issue when adding multiple popoverTip modifiers in my SwiftUI code. Regardless of whether a rule or parameter is specified, the tips constantly appear and disappear. Is this a recognized issue? How can multiple tip popovers be displayed sequentially on complex views? Even when one tip is invalidated, the glitch persists. Should popoverTip be used only on views without any state updates?

Here's sample code that demonstrates the problem:

import SwiftUI
import TipKit

@main
struct testbedApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }

    init() {
        try? Tips.configure()
    }
}

struct PopoverTip1: Tip {
    var title: Text {
        Text("Test title 1").foregroundStyle(.indigo)
    }

    var message: Text? {
        Text("Test message 1")
    }
}

struct PopoverTip2: Tip {
    var title: Text {
        Text("Test title 2").foregroundStyle(.indigo)
    }

    var message: Text? {
        Text("Test message 2")
    }
}

struct ContentView: View {
    private let timer = Timer.publish(every: 0.001, on: .main, in: .common).autoconnect()

    @State private var counter = 1

    var body: some View {
        VStack(spacing: 20) {
            Spacer()
            Text("Counter value: \(counter)").popoverTip(PopoverTip1())
            Spacer()
            Text("Counter value multiplied by 2: \(counter * 2)")
                .foregroundStyle(.tertiary)
                .popoverTip(PopoverTip2())
            Spacer()
        }
        .padding()
        .onReceive(timer) { _ in
            counter += 1
        }
    }
}

#Preview {
    ContentView()
}


2023-09-27

Sending POST returns 405 while my routes are OK

I have a weird problem. I try to log in with POST (username/password) to my application via this route:

Route::post('login', 'AuthController@postLogin')->name('login.post');

I get a 405:

The POST method is not supported for this route. Supported methods: GET, HEAD

In the local development environment everything is fine. In a production server environment I get this error.

I use Postman for testing.

This is my route list

  • GET|HEAD | /
  • GET|HEAD | domains
  • GET|HEAD | login
  • POST | login
  • GET|HEAD | research-programs
  • GET|HEAD | researchers
  • GET|HEAD | researchers/profile
  • PUT | researchers/profile
  • POST | researchers/profile/picture
  • GET|HEAD | researchers/random
  • GET|HEAD | researchers/search
  • GET|HEAD | researchers/{id}
  • PUT | researchers/{id}
  • POST | researchers/{id}/picture
  • GET|HEAD | units

I added a GET /login route and dumped the \Illuminate\Http\Request object to see what I got.

These lines suggest there is a redirect:

"REDIRECT_REDIRECT_REDIRECT_STATUS" => "200"

"REDIRECT_REDIRECT_REDIRECT_UNIQUE_ID" => "ZQ27NKn65K-juEgBrlpqKgAAAMw"

"REDIRECT_REDIRECT_REDIRECT_URL" => "/login/"

"REDIRECT_REDIRECT_REQUEST_METHOD" => "POST"

"REDIRECT_REDIRECT_REQUEST_SCHEME" => "https"

"REDIRECT_REDIRECT_REQUEST_URI" => "/login/?XDEBUG_SESSION_START=PHPSTORM"

"REDIRECT_REDIRECT_SERVER_PORT" => "443"

"REDIRECT_REDIRECT_SERVER_PROTOCOL" => "HTTP/1.1"

"REDIRECT_REDIRECT_STATUS" => "500"

...

"REDIRECT_REDIRECT_UNIQUE_ID" => "ZQ27NKn65K-juEgBrlpqKgAAAMw"

"REDIRECT_STATUS" => "500"

"REDIRECT_UNIQUE_ID" => "ZQ27NKn65K-juEgBrlpqKgAAAMw"

"REDIRECT_URL" => "/500.shtml"

"REMOTE_PORT" => "54560"

"REQUEST_METHOD" => "GET"

"REQUEST_SCHEME" => "https"

This is my api.php

Route::get('/', function(){
   return "welcome";
});
Route::post('/login', 'AuthController@postLogin')->name('login.post');
Route::get('/researchers', 'ResearcherController@allResearchers')->middleware('admin');
Route::get('/researchers/search', 'ResearcherController@searchResearchers');
Route::get('/researchers/random', 'ResearcherController@randomResearchers');
Route::get('/researchers/profile', 'ResearcherController@getOwnData')->middleware('authenticated');
Route::put('/researchers/profile', 'ResearcherController@putOwnData')->middleware('authenticated');
Route::post('/researchers/profile/picture', 'ResearcherController@changeOwnPicture')->middleware('authenticated');
Route::get('/researchers/{id}', 'ResearcherController@getResearcher');
Route::put('/researchers/{id}', 'ResearcherController@putResearcher')->middleware('admin');
Route::post('/researchers/{id}/picture', 'ResearcherController@postResearcherPicture')->middleware('admin');
Route::get('/domains', 'DomainController@getDomains');
Route::get('/faculty-thematics', 'FacultyThematicController@getFacultyThematics');
Route::get('/research-domain-thematics', 'ThematiqueDomaineRechercheController@getThematiqueDomaineRecherche');
Route::get('/research-programs',  'ResearchProgramController@getResearchPrograms');
Route::get('/search-axes',  'SearchAxeController@getSearchAxes');
Route::get('/units', 'UnitController@getUnits');

I've been struggling with this for hours now.



Asymmetric encryption with BCrypt.lib on the C++ side, decryption with RSACryptoServiceProvider on the C# side ("The parameter is incorrect")

I have a Windows service written in .NET (C#) and a library written in C++. I generate the public/private key pair on the .NET side with RSACryptoServiceProvider and send the public key to the C++ side. On the C++ side, after importing the public key received from C#, I encrypt my data with the BCrypt.lib library and return it to the .NET service encoded as base64. Back on the .NET side, I decrypt the encoded data received from C++ using RSACryptoServiceProvider.

During the rsa.Decrypt() call I get this exception: CryptographicException: 'The parameter is incorrect'

It seems like an encoding issue to me, but I am no expert here.

I have tried the following:

  • Using Convert.FromBase64String() to convert the received base64 data into a byte array.
  • Reversing the byte array obtained from that conversion.

Below are the code snippets I am working with.

.Net Side Code:

var rsa = RSACryptoServiceProvider.Create(1024);
var publicKey = rsa.ToXmlString(false);

var encryptedData = Convert.FromBase64String(GetEncryptedDataFromCPP(publicKey));
//Array.Reverse(encryptedData); 

byte[] decryptedData = rsa.Decrypt(encryptedData, RSAEncryptionPadding.Pkcs1); //CryptographyException: Invalid Parameter
var decryptedDataValue = Encoding.UTF8.GetString(decryptedData);

C++ Side Code:

void GetEncryptedDataFromCPP(const std::string& publicKey)
{
    BYTE* pbPublicKey = NULL;
    DWORD cbExp = 3;
    DWORD cbModulus = 128;
    DWORD cbKey = cbExp + sizeof(BCRYPT_RSAKEY_BLOB) + cbModulus;
    BCRYPT_RSAKEY_BLOB* pRsaBlob;
    PBYTE pbCurrent;

    // **Assuming I have fetched the Modulus and Exponent from the XML string and assigned them as below.**
    std::string modulus = "3XCSEveWJ3Mp41g5VxcmmlCYDL5X+VUX1ULOIl8TdsEu6bbS/Ho0ofBgAwglCrbRgAjm7ZW+EivEVLZRx5FVsEYqGX12fFZSn84Ye6D2rUYqvwR0kBE8MBCdirqg3gXAlmuIgxucWcxiT9NDTaC67Awe9yyQv3fJ2uPeOEXw0LU=";
    std::string exponent = "AQAB";

    std::vector<BYTE> PubKeyModulus_bin = base64_decode(modulus);
    std::vector<BYTE> PubKeyExp_bin = base64_decode(exponent);
    
    pbPublicKey = (BYTE*)CoTaskMemAlloc(cbKey);
    ZeroMemory(pbPublicKey, cbKey);
    pRsaBlob = (BCRYPT_RSAKEY_BLOB*)(pbPublicKey);
    // Make the Public Key Blob Header
    pRsaBlob->Magic = BCRYPT_RSAPUBLIC_MAGIC;
    pRsaBlob->BitLength = 128*8;
    pRsaBlob->cbPublicExp = 3;
    pRsaBlob->cbModulus = 128;
    pRsaBlob->cbPrime1 = 0;
    pRsaBlob->cbPrime2 = 0;
    
    BCRYPT_ALG_HANDLE hAlgorithm = NULL;
    BCRYPT_KEY_HANDLE hKey = NULL;
    NTSTATUS status;
    PUCHAR encryptedBuffer = NULL;
    DWORD encryptedBufferSize = 0;

    BYTE textData[] = "test";
    DWORD textDataSize = sizeof(textData);
    
    status = BCryptOpenAlgorithmProvider(&hAlgorithm,
        BCRYPT_RSA_ALGORITHM,
        NULL,
        0);
    if (!NT_SUCCESS(status)) {
        printf("Failed to get algorithm provider..status : %08x\n", status);
    }

    status = BCryptImportKeyPair(hAlgorithm,
        NULL,
        BCRYPT_RSAPUBLIC_BLOB,
        &hKey,
        (PUCHAR)pbPublicKey,
        cbKey,//155,
        BCRYPT_NO_KEY_VALIDATION);
    if (!NT_SUCCESS(status)) {
        printf("Failed to import Private key..status : %08x\n", status);
    }  
        
    status = BCryptEncrypt(hKey,
        textData,
        textDataSize,
        NULL,
        NULL,
        0,
        NULL,
        0,
        &encryptedBufferSize,
        BCRYPT_PAD_PKCS1
    );
    if (!NT_SUCCESS(status)) {
        printf("Failed to get required size of buffer..status : %08x\n", status);
    }

    encryptedBuffer = (PUCHAR)HeapAlloc(GetProcessHeap(), 0, encryptedBufferSize);

    if (encryptedBuffer == NULL) {
        printf("failed to allocate memory for blindedFEKBuffer\n");
    }

    status = BCryptEncrypt(hKey,
        textData,
        textDataSize,
        NULL,
        NULL,
        0,
        encryptedBuffer,
        encryptedBufferSize,
        &encryptedBufferSize,
        BCRYPT_PAD_PKCS1
    );

    if (!NT_SUCCESS(status)) {
        printf("Failed encrypt data..status : %08x\n", status);
    }
    printf("Encrypted Data\n");
    printMem(encryptedBuffer, encryptedBufferSize);
    printf("\n\n");

    std::string encryptedDataReturn = base64_encode(encryptedBuffer, encryptedBufferSize);
}

ReverseMemCopy (in case of little endian; I am hoping this is the correct way)

void ReverseMemCopy(BYTE* pbDest, BYTE const* pbSource, DWORD cb)
{
    for (DWORD i = 0; i < cb; i++)
    {
        //pbDest[cb - 1 - i] = pbSource[i];   // in case of Big Endian
        pbDest[i] = pbSource[i];
    }
}
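For reference, this is how I understand the layout that BCryptImportKeyPair expects for a BCRYPT_RSAPUBLIC_BLOB, sketched in Python (an illustrative helper, not production code): a little-endian header followed by the public exponent and then the modulus, both as big-endian byte strings, which is the same order the base64 fields of the .NET XML key decode to. If that's right, no byte reversal should be needed, but the exponent and modulus bytes do have to be appended after the header:

```python
import base64
import struct

BCRYPT_RSAPUBLIC_MAGIC = 0x31415352  # the ASCII bytes "RSA1"

def make_rsapublic_blob(modulus_b64: str, exponent_b64: str) -> bytes:
    """Build a BCRYPT_RSAPUBLIC_BLOB: six little-endian ULONG header
    fields, then the public exponent, then the modulus (both as the
    big-endian magnitudes that base64-decoding the XML fields yields)."""
    exp = base64.b64decode(exponent_b64)
    mod = base64.b64decode(modulus_b64)
    header = struct.pack(
        "<6I",                  # six little-endian ULONGs
        BCRYPT_RSAPUBLIC_MAGIC,
        len(mod) * 8,           # BitLength
        len(exp),               # cbPublicExp
        len(mod),               # cbModulus
        0, 0,                   # cbPrime1, cbPrime2 (zero for a public key)
    )
    return header + exp + mod
```

Note that "AQAB" decodes to the 3-byte exponent 0x010001, matching cbPublicExp = 3 in the C++ snippet.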

Required Input and Output from Code:

Input Data: test

My Public/Private Key Pair: (Generated at .Net Side)

<RSAKeyValue><Modulus>3XCSEveWJ3Mp41g5VxcmmlCYDL5X+VUX1ULOIl8TdsEu6bbS/Ho0ofBgAwglCrbRgAjm7ZW+EivEVLZRx5FVsEYqGX12fFZSn84Ye6D2rUYqvwR0kBE8MBCdirqg3gXAlmuIgxucWcxiT9NDTaC67Awe9yyQv3fJ2uPeOEXw0LU=</Modulus><Exponent>AQAB</Exponent><P>8uB+2rMMnduKEZ/j9pIkNuHPjqOaeBi0DMkfVTHlrknVdDwCreKVHEx9XIEyYQeYdpCwmj8hwHMEVmHJhVUcjw==</P><Q>6WeNjG2cOZ6y6e+A0k12Bn5UX/HNgeBjdfyy67PG9FMioJ9znAZsJmM5dWaQD9Px3OaHEp5tJhlqrUc6U25oew==</Q><DP>FMpW0Y3GJLUoSn3vW6oC45fM1p72mBU1RGrq/bX5vUOgvARvDkd5ECUUDhkZIOkviea0119UGk8+Lc7NG1a/zQ==</DP><DQ>Sda9vAhNHRlspn9jdKSWyxUaIkQ/7G+NZ50rCVAVh+PpF4F6NIj/m+FWIyLwPmGhqW2wm55ND3mI+wqGlDBgkw==</DQ><InverseQ>Pz8NIq8+1o6PXWdWJUJPyV1Wli9NdK5RlH8yc44QJYzAxcEFnI8CPHkQu0BHrN+mfOX9UN7LfHjI9wmOVStksw==</InverseQ><D>NjayPJyLEXt7fOKDn1PWqp8iqrQLO8ree+LQLtASJtfjEWsmOpP8wMzl5LggwX/CyNLlHrOzhiVa+tZsLSziykG4CzY1qwL6HS+oSoR7GbjkSZXQPbN8RM2tS8fZ0ZyRAtn7ohDRFNMZe6Y+cFQ3H2ijARpVl4VngTqyK/Syyz0=</D></RSAKeyValue>

Encrypted Data in Base64:

kycXjy03kP+VjWP4uYFMl4/avSOhJ269BZM/AeEj0RQmSgkfA+m9woENkDVqQuxOuw8/DqpeNreA7p11QOu3i5WNJ2wC2zhCVgXi0z+tjylQidAKiwNFNlvEfAQN3h18F/gLKkuCH7W3a7tqigxZc2jCOflA4ZeGx54ZL+gVDAw=

Encrypted Data in hex/BYTE form:

93 27 17 8f 2d 37 90 ff 95 8d 63 f8 b9 81 4c 97 8f da bd 23 a1 27 6e bd 05 93 3f 01 e1 23 d1 14 26 4a 09 1f 03 e9 bd c2 81 0d 90 35 6a 42 ec 4e bb 0f 3f 0e aa 5e 36 b7 80 ee 9d 75 40 eb b7 8b 95 8d 27 6c 02 db 38 42 56 05 e2 d3 3f ad 8f 29 50 89 d0 0a 8b 03 45 36 5b c4 7c 04 0d de 1d 7c 17 f8 0b 2a 4b 82 1f b5 b7 6b bb 6a 8a 0c 59 73 68 c2 39 f9 40 e1 97 86 c7 9e 19 2f e8 15 0c 0c

Exception at .Net Side in Decrypt()

System.Security.Cryptography.CryptographicException: 'The parameter is incorrect'



$wp_customize is not displaying section, control, settings for custom page template - WordPress

$wp_customize isn't showing the section, control, or settings I've set up for a custom template under the "Customize" feature in WordPress.

I figured out that my require_once path was initially wrong and fixed it as below:

elseif (is_page('annual-report')) {
    require_once(TEMPLATEPATH . '\pageFunctions\annualReportFunctions.php');

    // include_once(dirname(__DIR__).'/BOAT-wordpress-theme/pageFunctions/annualReportFunctions.php');
}

I intentionally introduced an error in the annualReportFunctions.php file to test whether WordPress could actually locate and load it. The good news is that an error message confirmed it can!

The bad news is that after I took the error out, although WP reads the file for the page it's on, it still doesn't show the options in the Customize feature.

I reviewed https://developer.wordpress.org/themes/customize-api/customizer-objects/ a few times to try and understand if I'm missing something in the section, control or settings section that would prevent it from displaying.

I commented out all the code in my annualReportFunctions.php file and posted in the example code from here: https://developer.wordpress.org/reference/hooks/customize_register/ to see if it would show up, and nothing showed up.

I've looked up my problem on Stack Overflow and haven't been able to figure out what I'm doing wrong yet.

I'm incredibly frustrated and confused. The page itself loads just fine without any errors. I would really appreciate any insight.

functions.php file:

<?php
function add_css()
{
    wp_register_style('style', get_template_directory_uri() . '/assets/css/style.css', false, '1.1', 'all');
    wp_enqueue_style('style');
}
add_action('wp_enqueue_scripts', 'add_css');

function add_script()
{
    wp_register_script('js-script', get_template_directory_uri() . '/assets/js/scripts.js', array('jquery'), 1.1, true);
    wp_enqueue_script('js-script');
}
add_action('wp_enqueue_scripts', 'add_script');

add_theme_support('menus');

function add_checkPage() {
    //Find Your Trip Page
    if (is_page('find-your-trip')) {
        require_once(TEMPLATEPATH . '\pageFunctions\findYourTripFunctions.php');
    }
    //Homepage
    elseif (is_page('home')) {
        require_once(TEMPLATEPATH . '\pageFunctions\homeFunctions.php');
    }
    //Who We Are Page
    elseif (is_page('who-we-are')) {
        require_once(TEMPLATEPATH . '\pageFunctions\whoWeAreFunctions.php');
    }
    //What We Do Page
    elseif (is_page('what-we-do')) {
        require_once(TEMPLATEPATH . '\pageFunctions\whatWeDoFunctions.php');
    }
    //Support Our Mission Page
    elseif (is_page('support-our-mission')) {
        require_once(TEMPLATEPATH . '\pageFunctions\supportOurMissionFunctions.php');
    }
    //Annual Report Page
    elseif (is_page('annual-report')) {
        require_once(TEMPLATEPATH . '\pageFunctions\annualReportFunctions.php');

        // include_once(dirname(__DIR__).'/BOAT-wordpress-theme/pageFunctions/annualReportFunctions.php');
    }
}
// wp_reset_query();
?>

page-{annualReport}.php file:

<?php
/*
 * Template Name: Annual Report
 * Description: Annual report page
 */
?>
<?php get_header();
add_css();
add_checkPage();
add_script(); ?>
<body>
<div class="sectionHeader">
      <div class="annualReport headerBanner">
      <img src="<?php echo wp_get_attachment_url(get_theme_mod('boat-annualreport-callout-image')) ?>" />
      </div>
      <div class="textBlockUnanimated rightBlock">
      <h1><?php echo get_theme_mod('boat-annualreport-callout-headline') ?></h1>

        <p>
          <a
            href="https://drive.google.com/file/d/1Vapxlp-5_y_MpRUFZllO7XojiHKuDVwa/view"
          >
            <button class="intro btn enterSite" id="homepage">
              <span class="spanText">2022</span>
            </button></a
          >
          <a
            href="https://theboatbus.com/wp-content/uploads/2022/01/BOAT-Annual-Report-2020-2021.pdf?189db0&189db0"
          >
            <button class="intro btn enterSite" id="homepage">
              <span class="spanText">2020-2021</span>
            </button></a
          ><br />
          <a
            href="https://theboatbus.com/wp-content/uploads/2022/01/BOAT-Annual-Report-2019.pdf?189db0&189db0"
          >
            <button class="intro btn enterSite" id="homepage">
              <span class="spanText">2019</span>
            </button></a
          >
        </p>
      </div>
    </div>
<?php get_footer(); ?>

annualReportFunctions.php file:

<?php

function boat_annualreport_callout($wp_customize) {

    $wp_customize->add_section('boat-annualreport-callout-section', array(
        'title' => 'Annual Reports'
    ));

    $wp_customize->add_setting('boat-annualreport-callout-image', array(
        'theme_supports' => '',
        'default' => '',
        'transport' => 'refresh',
        'sanitize_callback' => '',
        'sanitize_js_callback' => ''
    ));

    $wp_customize->add_control(new WP_Customize_Cropped_Image_Control($wp_customize, 'boat-annualreport-callout-image-control', array(
        'type' => 'image',
        'label' => 'Annual Report',
        'section' => 'boat-annualreport-callout-section',
        'settings' => 'boat-annualreport-callout-image',
        'height' => 590,
        'width' => 1920
    )));
}
add_action('customize_register', 'boat_annualreport_callout');

?>


I want to store and send images to a database using Spring

I'm working on a web application using Spring, together with a React developer who handles the frontend. When I send a POST request to the /board/writepro endpoint, I get the following error:

Content-Type 'application/x-www-form-urlencoded;charset=UTF-8' is not supported

How can I resolve this issue? Here's my code:

package org.polyproject.fishinghubpro.controller;


import jakarta.transaction.Transactional;
import lombok.extern.slf4j.Slf4j;
import org.polyproject.fishinghubpro.dto.BoardDto;
import org.polyproject.fishinghubpro.entity.Board;
import org.polyproject.fishinghubpro.service.BoardService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.web.PageableDefault;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

import java.security.Principal;
import java.util.List;

@RestController
@CrossOrigin("*")
@Transactional
@Slf4j
public class BoardController {


    @Autowired
    private BoardService boardService;

    @GetMapping("/board/write")//localhost:8090/board/write
    public String boardWriteForm(){

        return "boardwrite";
    }

    // original version
    @PostMapping("board/writepro")
    public ResponseEntity<BoardDto> boardWritePro(@RequestBody Board board, MultipartFile file, Principal principal) throws Exception {
        String userId = principal.getName();

        // create the post via boardService.write and receive the created post back
        BoardDto createdBoard = boardService.write(board, file, userId);

        // build a custom response with ResponseEntity and return it with a status code
        return new ResponseEntity<>(createdBoard, HttpStatus.CREATED); // fix: return createdBoard instead of response
    }





    @GetMapping("/board/list")
    public ResponseEntity<List<Board>> boardList(@PageableDefault(page = 0, size = 10, sort = "id", direction = Sort.Direction.DESC) Pageable pageable) {
        Page<Board> list = boardService.boardList(pageable);

        // convert the page of posts to a list
        List<Board> boardList = list.getContent();

        // return the data as JSON using ResponseEntity
        return new ResponseEntity<>(boardList, HttpStatus.OK);
    }

    @GetMapping("/board/view")//localhost:8080/board/view?id=1
    public String boardView(Model model, Integer id){


        model.addAttribute("board",boardService.boardView(id));
        return "boardview";
    }
    @DeleteMapping("/board/{id}")
    public ResponseEntity<String> deleteBoard(@PathVariable("id") Integer id) {
        try {
            // attempt to delete the post
            boardService.boardDelete(id);

            // response when the post was deleted successfully
            return ResponseEntity.noContent().build();
        } catch (Exception e) {
            // response when an error occurred while deleting the post
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("게시글 삭제 중 오류가 발생했습니다.");
        }
    }


    @GetMapping("/board/modify/{id}")
    public String boardModify(@PathVariable("id")Integer id,Model model){

        model.addAttribute("board",boardService.boardView(id));
        return "boardmodify";
    }
    @PostMapping("/board/update/")
    public String boardUpdate(@RequestBody BoardDto board, Principal principal) throws Exception {
        log.info("지나갑니다!~!==={}", board.getId());
        String userId = principal.getName();    // id of the user who wrote the post
        Board boardTemp = boardService.boardView(board.getId());   // fetch the previously saved post

        // update the stored post with the newly received data
        boardTemp.setTitle(board.getTitle());   // change detected via dirty checking; no explicit save needed
        boardTemp.setContent(board.getContent());
//        boardService.write(boardTemp, file, userId); // save the modified content
        return "수정되었습니다.";
    }

}

package org.polyproject.fishinghubpro.dto;

import lombok.Builder;
import lombok.Data;
import lombok.Getter;
import lombok.Setter;
import org.polyproject.fishinghubpro.entity.BaseEntity;

import java.util.Date;

@Data
@Builder
public class BoardDto {

    @Setter@Getter
    private int id;
    private String title;
    private String content;
    private String filename;
    private String filepath;
    private Date createdAt;
    private String memberNick; // member nickname

    // constructors, getters, setters omitted
}
package org.polyproject.fishinghubpro.entity;

import com.fasterxml.jackson.annotation.JsonIgnore;
import jakarta.persistence.*;
import lombok.Builder;
import lombok.Data;
import lombok.ToString;
import org.hibernate.annotations.CreationTimestamp;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

@Entity
@Data
public class Board{

    @Id
    @GeneratedValue(strategy= GenerationType.IDENTITY)
    private Integer id;

    private String title;

    @Column(columnDefinition = "TEXT", nullable = false) // store content as TEXT and add a NOT NULL constraint
    private String content;

    @Column(length = 15000 )
    private String filename;

    @Column(length = 30000 )
    private String filepath;

    @CreationTimestamp
    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "created_at", nullable = false, updatable = false)
    private Date createdAt;

    @ManyToOne
    @ToString.Exclude
    @JoinColumn(name="user_no")
    private Member member;
}

package org.polyproject.fishinghubpro.repository;

import jakarta.transaction.Transactional;
import org.polyproject.fishinghubpro.entity.Board;
import org.polyproject.fishinghubpro.entity.Member;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

import java.util.Optional;

@Repository
@Transactional
public interface BoardRepository  extends JpaRepository <Board, Integer> {
}

package org.polyproject.fishinghubpro.service;

import jakarta.transaction.Transactional;
import org.polyproject.fishinghubpro.dto.BoardDto;
import org.polyproject.fishinghubpro.entity.Board;
import org.polyproject.fishinghubpro.entity.Member;
import org.polyproject.fishinghubpro.repository.BoardRepository;
import org.polyproject.fishinghubpro.repository.member.MemberRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

import java.util.Date;
import java.io.File;
import java.util.List;
import java.util.UUID;

@Service
@Transactional
public class BoardService {

    @Autowired
    private BoardRepository boardRepository;

    @Autowired
    private MemberRepository memberRepository;

    // list posts with paging
    public Page<Board> boardList(Pageable pageable) {
        return boardRepository.findAll(pageable);
    }

    // fetch a specific post
    public Board boardView(Integer id) {
        return boardRepository.findById(id).get();
    }

    // delete a specific post
    public void boardDelete(Integer id) {
        boardRepository.deleteById(id);
    }

    // create a post
    public BoardDto write(Board board, MultipartFile file, String userId) throws Exception {
        Member member = memberRepository.findByUserId(userId).orElse(null);

        if (board.getContent() != null && !board.getContent().isEmpty()) {
//            // set the creation date
//            board.setCreatedAt(new Date());
            // file save path
            String projectPath = System.getProperty("user.dir") + "/src/main/resources/static/files";
            // use a UUID to avoid filename collisions
            UUID uuid = UUID.randomUUID();

            // only handle the file if one was uploaded
            if (file != null && !file.isEmpty()) {
                String fileName = uuid + "_" + file.getOriginalFilename();
                File saveFile = new File(projectPath, fileName);
                file.transferTo(saveFile);
                board.setFilename(fileName);
                board.setFilepath("/files/" + fileName); // set the file path dynamically
            }
            board.setMember(member);

            Board savedBoard = boardRepository.save(board); // save to the database

            // build a DTO containing the post and the member's nickname
            BoardDto boardDto = BoardDto.builder()
                    .title(savedBoard.getTitle())
                    .content(savedBoard.getContent())
                    .filepath(savedBoard.getFilepath())
                    .filename(savedBoard.getFilename())
                    .build();
            boardDto.setMemberNick(member.getUserNick());

            return boardDto;
        } else {
            // content is null or empty
            throw new Exception("게시물 내용을 입력해주세요.");
        }
    }
}
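As a side note, the collision-avoidance scheme in BoardService.write() (a random UUID prefixed to the original filename) can be sketched in Python; unique_filename is a hypothetical helper name, not part of the project:

```python
import uuid
from pathlib import PurePosixPath

def unique_filename(original_name: str) -> str:
    """Prefix the uploaded file's name with a random UUID so two uploads
    named 'photo.jpg' never overwrite each other, mirroring the
    uuid + "_" + originalFilename logic in BoardService.write()."""
    return f"{uuid.uuid4()}_{original_name}"

name = unique_filename("photo.jpg")
filepath = str(PurePosixPath("/files") / name)  # what gets stored on the Board row
```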


Thank you for helping me fix this.



How to send requests to custom pocketbase routes, using the pocketbase javascript sdk?

I am trying to find a 'native' way to send POST requests to my PocketBase server from my Svelte frontend:

main.go:

    app.OnBeforeServe().Add(func(e *core.ServeEvent) error {
        e.Router.AddRoute(echo.Route{
            Method: http.MethodPost,
            Path: "/api/myRoute",
            Handler: func(c echo.Context) error {
                // custom logic here: compute custom_value using parameters
                custom_value := "cool_value"
                return c.String(200, custom_value)
            },
        })
        return nil
    })

I would like to do something similar to the following :

Frontend.ts:

  onMount(async () => {
    // ask server for information based on a variable: example
    const example = "test";
    const example_reply = await pb.route(POST, '/api/myRoute', /*here, I would have to specify the fact that example is a form value of the POST request*/ example);
    // use example_reply
    if (example_reply === "cool_value") {
       console.log('got what I expected');
    } else {
       console.log('wrong!!!');
    }
  });

Does the function I am looking for exist, or should I use raw JS to fetch information from my custom routes?

I have tried using raw JavaScript to send a POST request, which works but does not use the pocketbase package:

const response = await fetch('/api/example', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ example: "test" }), // Convert the input to JSON
      });
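While digging around, I noticed the SDK seems to expose a generic `send(path, options)` helper. Here is a minimal sketch of how I imagine using it; the exact `send` signature is an assumption on my part, and `callCustomRoute` is a name I made up:

```javascript
// Sketch (untested): wrap the SDK's generic send() helper for custom routes.
// `pb` is whatever PocketBase client instance you already have.
function callCustomRoute(pb, path, body) {
  // Assumption: send() accepts fetch-like options and resolves to the parsed response.
  return pb.send(path, { method: 'POST', body });
}
```

So the onMount call would become something like `const example_reply = await callCustomRoute(pb, '/api/myRoute', { example });`. Worth checking the SDK docs before relying on this.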


Preventing screenshots with 'isSecureTextEntry' is not working on iOS 17

I am using 'isSecureTextEntry' on iOS 16 with this solution: https://stackoverflow.com/a/76390952/22598343

extension UIView {
    func makeSecure() {
        DispatchQueue.main.async {
            let field = UITextField()
            field.isSecureTextEntry = true
            self.addSubview(field)
            field.centerYAnchor.constraint(equalTo: self.centerYAnchor).isActive = true
            field.centerXAnchor.constraint(equalTo: self.centerXAnchor).isActive = true
            self.layer.superlayer?.addSublayer(field.layer)
            field.layer.sublayers?.first?.addSublayer(self.layer)
        }
    }
}

It does not seem to work on iOS 17. Are there any other solutions?

I've tried the DRM way, but got really bad performance.



2023-09-26

Twilio Dialogflow cx one-click-integration fails with unspecified error, worked before

@edit Through the browser developer console I was able to extract an actual error message:

Invalid AvailableAddOnSid provided

But what does that actually mean? I can find nothing about it on the web, and in fact I have no Twilio Add-ons installed.

Yesterday I successfully integrated Dialogflow and Twilio via the one-click integration.

Today I wanted to do the same with another agent, but I always got an unspecified error when completing the last step in Twilio. See the screenshot of the error in the popup at the last step of the integration:

Twilio Dialogflow cx integration setup error

I did not change anything. What can it be, and why isn't the error more specific?



How can I prevent my logged-in user from going back to the login URL in Angular 16?

I have an Angular 16 application and want to prevent a logged-in user from returning to the login URL. It should be possible with an authGuard, but I don't know how to achieve this. In my app-routing module, I have this:

import { authGuard } from './guards/auth.guard';
import { LoginComponent } from './user/login/login.component';
import { ProfileComponent } from './user/profile/profile.component';
const routes: Routes = [
  { path: '', redirectTo: '/user/profile', pathMatch: 'full' },
  {
    path: 'user',
    children: [
      { path: 'login', component: LoginComponent },
      { path: 'profile', component: ProfileComponent, canActivate: [authGuard] },

      { path: '', redirectTo: 'profile', pathMatch: 'full' },
      { path: '**', redirectTo: 'profile', pathMatch: 'full' },
    ],
  },

  { path: '**', redirectTo: '/user/profile', pathMatch: 'full' }
]
@NgModule({
  imports: [RouterModule.forRoot(routes, { useHash: false })],
  exports: [RouterModule],
})
export class AppRoutingModule {}

and my authGuard looks like this:

import { CanActivateFn, Router } from '@angular/router';
import { AuthService } from '../user/auth.service';
import { inject } from '@angular/core';
import { JwtHelperService } from '@auth0/angular-jwt';

export const authGuard: CanActivateFn = (route, state) => {
  const token = localStorage.getItem('access_token');
  const jwtHelper: JwtHelperService = inject(JwtHelperService);
  const authService = inject(AuthService);
  if (token && !jwtHelper.isTokenExpired(token)) {
    return true;
  } else {
    authService.logout();
    return false;
  }
};

The token authentication works fine, and the user/profile url is properly protected by the authGuard. For user/login however, this does not make sense, since everyone should be able to go to this page.

I tried to put the authGuard on the parent user path, so that all parent and child paths are guarded, but that causes issues with finding the correct path. I also tried checking in the authGuard with if (state.url == '/user/login') { router.navigate(['/user/profile']) }, but this causes a circular reroute.
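For reference, here is a sketch of the "reverse" guard idea I'm considering. The decision logic is pulled into a plain function (all names here are mine, not from any Angular API) so the framework wiring stays out of the way:

```typescript
// Hypothetical helper: decide what a guard on the login page should do.
// In a real CanActivateFn you would return `true` or a UrlTree built with
// inject(Router).createUrlTree(['/user/profile']) instead of this plain object.
type GuardDecision = true | { redirectTo: string };

function loginPageDecision(hasValidToken: boolean): GuardDecision {
  // A logged-in user should not see the login page again.
  return hasValidToken ? { redirectTo: '/user/profile' } : true;
}
```

The guard would then go on the 'login' route only, which should avoid the circular reroute since the profile route keeps its own separate authGuard.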



2023-09-25

Running different versions of Postgres

I intend to use different versions of PostgreSQL: one for creating a database for an application, and the other for setting up a development environment. How do I do so?

When I run PostgreSQL commands like pg_config, it returns the version of only one of the installed versions. How do I ensure there is no conflict?
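From what I understand so far (the paths below are an assumption; they follow the Debian/Ubuntu layout where each major version keeps its own bin directory), pg_config simply resolves to whichever version's binaries come first on PATH, and each cluster must listen on its own port. A per-shell selection might look like:

```shell
# Hypothetical install layout (Debian/Ubuntu style); adjust to your system.
PG15_BIN=/usr/lib/postgresql/15/bin
PG14_BIN=/usr/lib/postgresql/14/bin

# Whichever bin dir is first on PATH decides what pg_config, psql, etc. resolve to.
PATH="$PG15_BIN:$PATH"

# Each cluster also needs its own port so both can run side by side, e.g.:
#   psql -p 5432 ...   # app database, version 15
#   psql -p 5433 ...   # dev environment, version 14

echo "${PATH%%:*}"    # prints /usr/lib/postgresql/15/bin
```

That way the two installs never conflict: the PATH ordering picks the client tools, and the distinct ports keep the servers apart.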



Grid column span to take full width when remaining columns are empty

I'm working on a calendar element whereby the day events are utilising grid row to span over their specified time frame.

However, I seem to be struggling when it comes to setting the event widths. By default an event will take up the full width of the day when it's the only event at that time, and if there are concurrent events at any point, they will reduce their widths accordingly so they can all fit.

My problem occurs when I have a group of events happening at the same time, which generates multiple columns, and then an event later in the day which doesn't have any coinciding events. This event is now limited to the widths of the columns created from the events earlier in the day.

See image for context: Image of annotated demo

See below for a working demo.

(function() {

  function init() {
    let calendarElement = document.getElementsByClassName('calendar')[0],
      events = calendarElement.getElementsByClassName('calendar__event');
    _positionEventsOnGrid(events);
  }

  function _positionEventsOnGrid(events) {
    Array.from(events).forEach(event => {
      let gridRow = event.getAttribute('data-grid-row');

      if (gridRow) {
        let gridRowSpan = event.getAttribute('data-grid-row-span');

        event.style.gridRow = gridRowSpan ? `${gridRow} / span ${gridRowSpan}` : gridRow;
      }
    });
  }

  init();
})();
body {
  margin: 0;
  block-size: 100vh;
  inline-size: 100vw;
}

.calendar {
  $block: &;
  inline-size: 100%;
  block-size: 100%;
  display: grid;
  grid-template-columns: 2.5rem 1fr;
  grid-template-rows: auto 1fr;
}

.calendar__dayNames {
  grid-row: 1;
  grid-column: 2;
  display: flex;
}

.calendar__dayName {
  flex-grow: 1;
  flex-basis: 0;
}

.calendar__schedule {
  grid-row: 2;
  grid-column: 1/span 2;
  display: grid;
  grid-template-columns: 2.5rem 1fr;
  border-block-start: 1px solid #000;
}

.calendar__timeline {
  display: grid;
  grid-template-rows: repeat(19, 1.375rem);
  gap: 0 0.625rem;
  grid-column: 1/ span 2;
  grid-row: 1;
  position: relative;
}

.calendar__timelineItem {
  display: flex;
  align-items: center;
  border-block-end: 1px dotted #222;
}

.calendar__timelineItem:nth-child(even) {
  border-block-end: 1px solid #000;
}

.calendar__timelineItem:nth-child(odd):after {
  display: inline;
  content: attr(data-time);
}

.calendar__dayEventsContainer {
  position: relative;
  grid-row: 1;
  grid-column: 2;
  display: flex;
}

.calendar__dayEvents {
  display: grid;
  grid-template-rows: repeat(19, 1.375rem);
  gap: 0 0.625rem;
  border-inline-start: 1px solid #000;
  flex-grow: 1;
  flex-basis: 0;
}

.calendar__event {
  background-color: #346DA8;
  border: none;
}
<div class="calendar">
  <div class="calendar__dayNames">
    <div class="calendar__dayName">Monday</div>
    <div class="calendar__dayName">Tuesday</div>
    <div class="calendar__dayName">Wednesday</div>
    <div class="calendar__dayName">Thursday</div>
    <div class="calendar__dayName">Friday</div>
  </div>
  <div class="calendar__schedule">
    <div class="calendar__timeline">
      <div class="calendar__timelineItem" data-time="09:00"></div>
      <div class="calendar__timelineItem" data-time="09:15"></div>
      <div class="calendar__timelineItem" data-time="09:30"></div>
      <div class="calendar__timelineItem" data-time="09:45"></div>
      <div class="calendar__timelineItem" data-time="10:00"></div>
      <div class="calendar__timelineItem" data-time="10:15"></div>
      <div class="calendar__timelineItem" data-time="10:30"></div>
      <div class="calendar__timelineItem" data-time="10:45"></div>
      <div class="calendar__timelineItem" data-time="11:00"></div>
      <div class="calendar__timelineItem" data-time="11:15"></div>
      <div class="calendar__timelineItem" data-time="11:30"></div>
      <div class="calendar__timelineItem" data-time="11:45"></div>
      <div class="calendar__timelineItem" data-time="12:00"></div>
      <div class="calendar__timelineItem" data-time="12:15"></div>
      <div class="calendar__timelineItem" data-time="12:30"></div>
      <div class="calendar__timelineItem" data-time="12:45"></div>
      <div class="calendar__timelineItem" data-time="13:00"></div>
      <div class="calendar__timelineItem" data-time="13:15"></div>
      <div class="calendar__timelineItem" data-time="13:30"></div>

    </div>
    <div class="calendar__dayEventsContainer">
      <div class="calendar__dayEvents">

        <button type="button" class="calendar__event" data-grid-row="1" data-grid-row-span="6"></button>
        <button type="button" class="calendar__event" data-grid-row="1" data-grid-row-span="3"></button>
        <button type="button" class="calendar__event" data-grid-row="1" data-grid-row-span="1"></button>
        <button type="button" class="calendar__event" data-grid-row="10" data-grid-row-span="1"></button>
      </div>
      <div class="calendar__dayEvents">
      </div>
      <div class="calendar__dayEvents">
        <button type="button" class="calendar__event" data-grid-row="1" data-grid-row-span="6"></button>
        <button type="button" class="calendar__event" data-grid-row="1" data-grid-row-span="1"></button>
      </div>
      <div class="calendar__dayEvents">

      </div>
      <div class="calendar__dayEvents">
        <button type="button" class="calendar__event" data-grid-row="1" data-grid-row-span="1"></button>
      </div>
    </div>
  </div>
</div>

I have tried a few things: grid-column: 1/-1 but this will do it to all the events and therefore incorrectly format the concurrent events.

grid-auto-flow: dense however, this didn't seem to do the trick in my example.

grid-template-columns: repeat(auto-fit, minmax()), but the min value as part of the minmax needs to be a fixed size.

I do wonder whether I need to extend the positionEventsOnGrid() function so it checks whether 'grid-column: 1/-1' can be applied to specific events. If it isn't coinciding with other events at that time. However, I feel this might be quite complex so thought I'd ask around in case there was a simpler way.

I want singular events to span the full width of their day, regardless of whether there are concurrent events earlier or later in the day.
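To make the idea concrete, the extension I have in mind for positionEventsOnGrid() boils down to an overlap test like this (a sketch with made-up names; rows are 1-based and an event occupies [row, row + span)):

```javascript
// Do two events share at least one timeline row?
function overlaps(a, b) {
  return a.row < b.row + b.span && b.row < a.row + a.span;
}

// An event may take `grid-column: 1 / -1` if nothing else runs at the same time.
function canSpanFullWidth(event, dayEvents) {
  return dayEvents.every(other => other === event || !overlaps(event, other));
}
```

With the first day's events from the demo ({row: 1, span: 6}, {row: 1, span: 3}, {row: 1, span: 1}, {row: 10, span: 1}), only the row-10 event passes the check, which matches the behaviour I'm after.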



Composer Project - Redash + Redis + Postgres On Synology NAS

I am just diving into how Containers work, and have gotten pretty far, but when orchestrating a few images into one project yml file, I am running around in circles.

So I have the following images:

  1. redash/redash:latest <-- The primary app...
  2. redis:latest
  3. postgres:latest

I have all my ports open for each service, and tested that I could login to the postgres database from adminer:latest running in the same container.

I used a few different examples from the Git repos and other Stack articles, plus the help of @David Maze, to build the yml file below.

I believe now that I am missing a database setup step for Redash. The scheduler that interacts with my database gives me: ProgrammingError: (psycopg2.ProgrammingError) relation “queries” does not exist

I found this article: discuss.redash.io; where the poster is at the same point. A respondent said "Did you create the database tables?"

It looks like I need to run this command from one of the Redash server consoles: docker-compose run --rm server create_db, but I am using Synology's Container Manager, not the docker-compose CLI...

Do I need a Volume to connect to the server create_db command?

Login is disabled on that site, otherwise I would try asking this question there.

Here is my updated yml:

version: "3.1"

x-redash-service: &redash-service
  image: redash/redash:latest
  depends_on:
    - db
    - cache
  restart: always

x-redash-env: &redash-env
  PYTHONUNBUFFERED: 0
  REDASH_LOG_LEVEL: "INFO"
  REDASH_REDIS_URL: "redis://cache:6379/0"
  REDASH_DATABASE_URL: "postgresql://postgres:postgresPassword@db/postgres"

services:
  redash:
    <<: *redash-service
    command: server
    ports:
      - "5003:5000"
    environment:
      <<: *redash-env
      REDASH_WEB_WORKERS: 4

  adhoc_worker:
    <<: *redash-service
    command: worker
    environment:
      <<: *redash-env
      QUEUES: "queries"
      WORKERS_COUNT: 2

  scheduler:
    <<: *redash-service
    command: scheduler
    environment:
      <<: *redash-env
      QUEUES: "celery"
      WORKERS_COUNT: 1

  scheduled_worker:
    <<: *redash-service
    command: worker
    environment:
      <<: *redash-env
      QUEUES: "scheduled_queries,schemas"
      WORKERS_COUNT: 1

  db:
    image: postgres:latest
    restart: always
    ports:
      - "5433:5432"
    volumes:
      - ./postgres/data:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgresPassword
      POSTGRES_DB: postgresDatabase
    #command: "create_db"

  nginx:
    image: redash/nginx:latest
    ports:
      - "2501:80"
    depends_on:
      - redash
    restart: always

  cache:
    image: redis:latest
    restart: always
    depends_on:
      - db
    ports:
      - "6379:6379"
    volumes:
      - ./redis/data:/data:rw
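In case it helps, my current guess (untested) is that the missing initialization step could be expressed as a one-off service in the same yml, reusing the anchors, so Container Manager can start it once instead of me needing docker-compose run:

```
  # One-shot initialization guess: start once to create the tables, then leave stopped.
  create_tables:
    <<: *redash-service
    command: create_db   # same subcommand the docs pass to `docker-compose run server`
    restart: "no"        # override the anchor's `restart: always`
```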

Thanks again!

Please let me know if I should provide further details.

EDIT_1: Update yml composition and status of build

EDIT_2: Update yml composition and status of build && Narrowed scope of issue.



2023-09-24

STContains, STIntersects and STWithin return wrong result for geography

I'm using SQL Server to store customers' location info (longitude and latitude) and Leaflet to show them on a map. I'm also using Leaflet to draw polygons for city areas, which I store in another SQL table with the geography type. Finally, with the query below, I want to know whether a customer is inside an area (polygon) or not:

DECLARE @latitude DECIMAL(25,18);
DECLARE @longitude DECIMAL(25,18);
DECLARE @customerId BIGINT;
DECLARE @geographicalAreaId INT;
DECLARE @coordinates GEOGRAPHY;

DECLARE @isInsideArea BIT;

declare @insideCOUNT int;
SET @insideCOUNT=0;

DECLARE @point geography;
DECLARE @polygon geography;

DECLARE getCustomerGeo_CSR CURSOR FAST_FORWARD READ_ONLY 
FOR

    SELECT DISTINCT Fk_CustomerId,ca.Latitude,ca.Longitude FROM Tbl_CustomerAddresses ca
        WHERE ca.Latitude IS NOT NULL AND ca.Longitude IS NOT NULL;

OPEN getCustomerGeo_CSR; 
FETCH NEXT
FROM getCustomerGeo_CSR
INTO @customerId,@latitude, @longitude

WHILE @@FETCH_STATUS = 0
BEGIN

SET @point = geography::Point(cast(@latitude as float), cast(@longitude as float), 4326);

    DECLARE getGeoArea_CSR CURSOR FAST_FORWARD READ_ONLY 
    FOR
        SELECT ga.GeographicalAreaId,ga.Coordinates               
        FROM   Tbl_GeographicalAreas ga         
    
    OPEN getGeoArea_CSR; 
    FETCH NEXT
    FROM getGeoArea_CSR
    INTO @geographicalAreaId, @coordinates

    WHILE @@FETCH_STATUS = 0
    BEGIN
        
        SET @polygon = geography::STGeomFromText((SELECT Coordinates FROM Tbl_GeographicalAreas WHERE GeographicalAreaId = @geographicalAreaId).STAsText(),4326);


        IF @polygon.STContains(@point) = 1
        BEGIN
            SET @insideCOUNT = @insideCOUNT+1;
        END
        
         FETCH NEXT
         FROM getGeoArea_CSR
         INTO @geographicalAreaId, @coordinates
    END
    CLOSE getGeoArea_CSR;
    DEALLOCATE getGeoArea_CSR;
    

    FETCH NEXT
    FROM getCustomerGeo_CSR
    INTO @customerId,@latitude, @longitude
END
CLOSE getCustomerGeo_CSR;
DEALLOCATE getCustomerGeo_CSR;

print @insideCOUNT;

but I always get the wrong result.

here is one of my polygons:

POLYGON ((46.389019 38.033642, 46.388397 38.029045, 46.386788 38.027253, 46.383269 38.024701, 46.37872 38.021252, 46.375308 38.020238, 46.374493 38.021861, 46.375351 38.023179, 46.37445 38.02487, 46.37327 38.025478, 46.371167 38.026543, 46.368678 38.026205, 46.367347 38.02727, 46.364343 38.028318, 46.367648 38.030076, 46.368442 38.030329, 46.3696 38.030329, 46.370029 38.030769, 46.370716 38.032036, 46.371725 38.034014, 46.372476 38.035298, 46.372626 38.035772, 46.372755 38.036819, 46.372819 38.037749, 46.373119 38.038814, 46.373441 38.039219, 46.376252 38.03785, 46.378098 38.037259, 46.380415 38.036853, 46.384835 38.036025, 46.386852 38.035079, 46.387968 38.034301, 46.388805 38.033946, 46.389019 38.033642, 46.389019 38.033642))
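To rule out my own data, I sanity-checked points against this polygon outside SQL Server with a plain-Python even-odd ray cast (no GIS libraries; a planar approximation, which should be close enough at city scale). I also read that geography polygons are ring-orientation sensitive, with the interior expected on the left as you walk the ring, and that ReorientObject() can flip a reversed ring, so that may be relevant here:

```python
def point_in_ring(lon, lat, ring):
    """Even-odd ray casting on a closed ring of (lon, lat) pairs (first == last).

    Planar approximation: fine for sanity-checking small, city-sized polygons,
    not a replacement for geodesic STContains.
    """
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        if (y1 > lat) != (y2 > lat):  # edge crosses the horizontal ray at this latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside
```

Usage would be point_in_ring(46.375, 38.03, ring), with the ring parsed from the WKT above (note the WKT lists longitude first).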


2023-09-23

How to load a 3d model in .glb file format in ThreeJS?

I have read the official documentation, but it seems incomplete. The web browser displays a black screen but not the model, and there isn't any error in the browser console.

The following is the code:

import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('cube.glb', function(gltf) {
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    renderer.outputColorSpace = THREE.SRGBColorSpace;

    const scene = new THREE.Scene();
    scene.add(gltf.scene);

    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.set(0, 0, 10);

    function animate() {
        requestAnimationFrame(animate);
        renderer.render(scene, camera);
    }

    document.body.appendChild(renderer.domElement);
    animate();
}, undefined, function(error) {
    console.error(error);
});

--- Edit September 22 ---

Now it shows the 3D model, but everything is black.

I edited the code in this way:

import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const loader = new GLTFLoader();

loader.load('cube.glb', function(gltf) {
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    renderer.outputColorSpace = THREE.SRGBColorSpace;

    const scene = new THREE.Scene();
    scene.background = new THREE.Color(0xffffff);
    scene.add(gltf.scene);

    scene.add(new THREE.AxesHelper(5));

    const light = new THREE.AmbientLight(0xff0000);
    scene.add(light);

    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.01, 1000);
    camera.position.set(0, 0, 0.1);
    camera.lookAt(new THREE.Vector3());
    scene.add(camera);

    const controls = new OrbitControls(camera, renderer.domElement);
    controls.update();

    function animate() {
        requestAnimationFrame(animate);
        controls.update();
        renderer.render(scene, camera);
    }

    document.body.appendChild(renderer.domElement);

    animate();
}, undefined, function(error) {
    console.error(error);
});

You can see it from my own computer: http://pedrou2106.ddns.net:8080/

How to fix it?



2023-09-22

Graph (networkit) - create edges from the list of duplicated records for any columns pair in pandas

I'm trying to create a graph with edges only between nodes (record indices in the dataframe) that have the same values in any 2 or more columns.
What I do: I create a list of all possible pairs of column names and go through them searching for duplicates, for which I extract indexes and create edges.
The problem is that for huge datasets (millions of records) this solution is too slow and requires too much memory.

What I do:

df = pd.DataFrame({
    'A': [1, 2, 3, 4, 5],
    'B': [1, 1, 1, 1, 2],
    'C': [1, 1, 2, 3, 3],
    'D': [2, 7, 9, 8, 4]})  
   A  B  C  D
0  1  1  1  2
1  2  1  1  7
2  3  1  2  9
3  4  1  3  8
4  5  2  3  4

Here, rows 0 and 1 have the same values in two columns, B and C.
So, for nodes 0,1,2,3,4 I need to create edge 0-1. All other record pairs share at most one field.

    graph = nk.Graph(num_nodes, directed=False, weighted=False)

    # Get the indices of all unique pairs
    indices = np.triu_indices(len(column_names), k=1)
    # Get the unique pairs of column names
    unique_pairs = np.column_stack((column_names[indices[0]], column_names[indices[1]]))

    for col1, col2 in unique_pairs:
        # Filter the dataframe directly
        duplicated_rows = df[[col1, col2]].dropna()
        duplicated_rows = duplicated_rows[duplicated_rows.duplicated(subset=[col1, col2], keep=False)]

        for _, group in duplicated_rows.groupby([col1, col2]):
            tb_ids = group.index.tolist()
            for i in range(len(tb_ids)):
                for j in range(i + 1, len(tb_ids)):
                    graph.addEdge(tb_ids[i], tb_ids[j])

Main question: how can I speed up / improve this solution? I was thinking about parallelizing by column combination, but then I can't figure out how to create the edges in the graph properly.
Appreciate any help.
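For reference, my grouping step for a single column pair reduces to this stdlib-only restatement (my own helper name; the quadratic per-group edge generation is exactly the part that blows up on big groups):

```python
from collections import defaultdict
from itertools import combinations

def edges_for_column_pair(values1, values2):
    """Row indices sharing the same (col1, col2) value pair get pairwise edges."""
    groups = defaultdict(list)
    for idx, pair in enumerate(zip(values1, values2)):
        groups[pair].append(idx)
    edges = set()
    for ids in groups.values():
        if len(ids) > 1:
            edges.update(combinations(ids, 2))
    return edges
```

With the B and C columns from the example (values [1, 1, 1, 1, 2] and [1, 1, 2, 3, 3]), this yields {(0, 1)}, the single expected edge. Since each column pair produces an independent edge set, I imagine parallel workers could each return such a set and the edges be added to the graph in one final single-threaded pass.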



How do I redirect a subdomain to a subdirectory in primary domain serving as reverse-proxy?

I'll describe the current setup first followed by what I'd like to change it to.

Current Setup

All of the above works as expected.

What We Want

Now the marketing team, wanting to up the SEO game, wants blog.example.com/* to redirect to www.example.com/blog/*

What I've Tried So Far (with failure, of course)

  • Set up a Redirect on the Flywheel dashboard to www.example.com/blog/$1 (expectedly resulted in infinite redirects).
  • Deleted the A record for blog.example.com (pointing to the Flywheel IP address) from DNSimple and replaced it with a CNAME pointing to www.example.com (didn't work).

Given the above stack, how can the desired result be achieved? Would you recommend a stack change to achieve the outcome?

Much thanks in advance, as I've spent a good few days trying to make this work.
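For what it's worth, if the reverse proxy in front of www.example.com happens to be nginx (an assumption on my part; the actual proxy isn't something I control directly on Flywheel), I imagine the redirect itself would be a dedicated server block like:

```
# Hypothetical nginx sketch: answer for blog.example.com with a permanent
# redirect into the /blog/ path on the primary host, preserving the rest
# of the URI ($request_uri already starts with a slash).
server {
    listen 443 ssl;
    server_name blog.example.com;
    # ssl_certificate / ssl_certificate_key lines omitted
    return 301 https://www.example.com/blog$request_uri;
}
```

The key point being that the redirect has to live wherever blog.example.com's DNS actually points, otherwise the Flywheel-side redirect loops back on itself.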



2023-09-21

Microsoft 365 Dynamic group validation rule error (Memberof)

I'm looking to create a dynamic MS365 group, I simply want to apply the following dynamic rule

user.memberof -any (group.objectId -in ['objectID1','objectID2'])

According to the documentation this should be fully supported; in fact, I have groups created months ago which use this exact syntax. Even when copying rules directly from those working groups, created exactly the same way, I get the same error:

Failed to create group x. Dynamic membership rule validation error: Wrong property applied

Any thoughts?

I attempted to create a Microsoft 365 dynamic group within Azure AD. When creating it with a rule meant to capture members from multiple other assigned security groups, I am given the error

Failed to create group x. Dynamic membership rule validation error: Wrong property applied

This works on groups in prod, so I would expect it to function and create without issue.



SQLAlchemy throws no exceptions

I am trying to append a child class to a parent class. The problem is that even when I make mistakes on purpose, the code throws no exceptions but doesn't work. I have found no errors in the IDE; I've tried multiple IDEs (Spyder, PyCharm, VSC) and none of them show exceptions. I've also tried to print the exceptions explicitly, and this doesn't work either (though it did work in some cases, which completely blows my mind). Moreover, the code doesn't even reach the print command I've set there. Here is what I have:

database_append_card.py:

async def append_all(message: types.Message, state: FSMContext):
    async with state.proxy() as data:
        new_card = CardBase(
            name=data['card_name'],
            front=data['front'],
            back=data['back'],
            )
    await add_child_to_db(
        child=new_card,
        column=str(message.from_user.id),
        parent_class=UserBase,
        my_async_session=async_session_maker)
    await bot.send_message(message.from_id, 'The card has been appended! ✅')

database_commands.py:

async def add_child_to_db(
    child,
    column,
    parent_class,
    my_async_session: AsyncSession):

    """ Adds a child class to parent class """

    async with my_async_session.begin() as session:
        try:
            parent = await session.execute(select(parent_class).where(parent_class.column==column))
            print(f'\n\n\n\n{parent}\n\n\n\n')
            parent.children.append(child)
        except SQLAlchemyError as exc:
            print(exc)
            raise
        finally:
            await session.close()

database_models.py:

class UserBase(Base):

    """ An account for storing and accessing multiple learning cards """

    __tablename__ = 'users'

    id = Column(Integer, primary_key=True, autoincrement=True)
    telegram_id = Column(String, unique=True)
    username = Column(String(100), unique=True)
    name = Column(String(200))
    surname = Column(String(200))
    my_cards: Mapped[list['CardBase']] = relationship()


class CardBase(Base):

    """ A learning card with front and back text bound to a specific user """

    __tablename__ = 'cards'

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    user_id: Mapped[int] = mapped_column(ForeignKey('users.id', ondelete='CASCADE'),
                             nullable=False)
    name = mapped_column(Text)
    front = mapped_column(Text)
    back = mapped_column(Text)

asyncpg==0.28.0
SQLAlchemy==2.0.19
Python 3.10.12


Segmentation Fault When Sending Arrays Over a Certain Size with Open MPI

I am writing a program to run with an ifiniband and intel-based cluster using openmpi, pmix, and SLURM scheduling.

When I run my program on the cluster with an input matrix over 38x38 on each node, I get a segfault on both send/recv and collective calls. Below 38x38 on each node, there are no issues. Also, the code works on a single node and with IntelMPI. The Segfault only occurs when using multiple nodes and OpenMPI.

Here is a minimal sample code that reproduces my error:

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int proc_size, p_rank;

    MPI_Init (&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &proc_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &p_rank); 

    int x = 1600;

    MPI_Status status;
    double* A = calloc(x, sizeof(double));

    if (p_rank == 0)
        for (int i = 1; i < proc_size; ++i) {
            MPI_Recv(A, x, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &status);
        }
    else
        MPI_Send(A, x, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);

    MPI_Finalize();

    return 0;
}

I am using srun --mpi=pmix -n 36 my_program in my sbatch job, but the segfault also occurs when using mpirun.

When x is below 1500, it produces no errors. When x is above 1500, I get something similar to the following:

[node30:14535] *** Process received signal ***
[node42:144621] *** Process received signal ***
[node42:144621] Signal: Segmentation fault (11)
[node42:144621] Signal code: Address not mapped (1)
[node42:144621] Failing at address: 0x7fcc6fdcc210
[node30:14535] Signal: Segmentation fault (11)
[node30:14535] Signal code: Address not mapped (1)
[node30:14535] Failing at address: 0x7fe9fe17d210
[node19:91882] *** Process received signal ***
[node19:91882] Signal: Segmentation fault (11)
[node19:91882] Signal code: Address not mapped (1)
[node19:91882] Failing at address: 0x7fb02739d210
srun: error: node30: task 4: Segmentation fault
srun: error: node42: task 7: Segmentation fault

I should also note that the program completes successfully despite the error, but blocks the next run from starting.



2023-09-20

How to detect strikethrough text from docx tables?

I'm using python-docx to parse some tables into dictionaries. However, some of those tables contain strikethrough text, which needs to be excluded.

I have already found how to detect strikethrough text in paragraphs and how to apply strikethrough formatting myself, but nowhere can I find how to check for strikethrough text in tables. As far as I can tell from the documentation, neither the Table object nor the cells have a "Run" object, which is something Paragraphs have that contains style data.

Without the Run object, there's no style data.



2023-09-19

Why can I not deploy django project with elastic beanstalk?

I have created a zip file with all my programs and it runs well locally. For some reason Elastic Beanstalk gives several errors when I deploy my zip file, such as:

"Warning: Configuration files cannot be extracted from the application version my_tennis_club2. Check that the application version is a valid zip or war file."

"Error: The instance profile aws-elasticbeanstalk-ec2-role associated with the environment does not exist."

"Error: Failed to launch environment"

I followed the tutorial https://www.w3schools.com/django/django_deploy_eb.php and got the same results as the tutorial up until uploading the zip file containing my entire project to Elastic Beanstalk.

edit:

Elastic Beanstalk health is degraded, and I can't open the domain once Elastic Beanstalk is done loading the zip file containing my Django project. I can run my project on localhost when I write python manage.py runserver on my computer. I created an EC2 instance and an IAM role to which I gave administrative access, so it should have access to both EC2 and Beanstalk. It also says "Unable to assume role "arn:aws:iam::976090601851:role/masterROLE". Verify that the role exists and is configured correctly. Impaired services on all instances." masterROLE is the name of the IAM role that I created.

My Django project files and requirements.txt: the above are the directories and files included in the zip that I upload.

Edit 2: I created the IAM role as described but it still doesn't work (see screenshots). The health is still degraded, and when I try opening the URL, which should work, under https://sqs.eu-north-1.amazonaws.com/976090601851/awseb-e-j7uhxx3a5p-stack-AWSEBWorkerQueue-citCtAuZFkvV/members I get UnknownOperationException. In the zip file I have also included, apart from my project, a requirements.txt file and a .ebextensions folder containing django.config.

Edit 3: I clicked the box "Create and use new service role". I then used the IAM role aws-elasticbeanstalk-ec2-role, which I created as described. I don't get any error messages anymore like I did before, but the health is still degraded, and when I try to access the website I get a "502 Bad Gateway" error message. Below is my eb-engine.log:

2023/09/18 19:26:28.733627 [INFO] Starting EBPlatform-PlatformEngine
2023/09/18 19:26:28.733702 [INFO] reading event message file
2023/09/18 19:26:28.758543 [INFO] Engine received EB command userdata-exec

2023/09/18 19:26:28.798950 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBAutoScalingGroup --region eu-north-1
2023/09/18 19:26:29.479440 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBBeanstalkMetadata --region eu-north-1
2023/09/18 19:26:30.314960 [INFO] This is a workflow controlled instance.
2023/09/18 19:26:30.315052 [INFO] Engine command: (env-launch)

2023/09/18 19:26:30.315547 [INFO] Executing instruction: SyncClock
2023/09/18 19:26:30.315552 [INFO] Starting SyncClock
2023/09/18 19:26:30.315567 [INFO] Running command /bin/sh -c /usr/bin/chronyc tracking
2023/09/18 19:26:30.323708 [INFO] Reference ID    : A9FEA97B (169.254.169.123)
Stratum         : 4
Ref time (UTC)  : Mon Sep 18 19:26:29 2023
System time     : 0.881987631 seconds slow of NTP time
Last offset     : -0.959451079 seconds
RMS offset      : 0.959451079 seconds
Frequency       : 4.176 ppm fast
Residual freq   : -39.628 ppm
Skew            : 2.331 ppm
Root delay      : 0.000236736 seconds
Root dispersion : 0.000381636 seconds
Update interval : 0.0 seconds
Leap status     : Normal

2023/09/18 19:26:30.323735 [INFO] Running command /bin/sh -c /usr/bin/chronyc -a makestep
2023/09/18 19:26:31.210717 [INFO] 200 OK

2023/09/18 19:26:31.210764 [INFO] Skipping Configure OS
2023/09/18 19:26:31.210771 [INFO] Skipping LockRepo
2023/09/18 19:26:31.210777 [INFO] Skipping GenerateEBBanner
2023/09/18 19:26:31.210783 [INFO] Skipping Install Process Manager
2023/09/18 19:26:31.210788 [INFO] Skipping install syslog
2023/09/18 19:26:31.210794 [INFO] Skipping install cron
2023/09/18 19:26:31.210799 [INFO] Skipping install proxy
2023/09/18 19:26:31.210804 [INFO] Skipping installhealthd
2023/09/18 19:26:31.210809 [INFO] Skipping Install Log Streaming Manager
2023/09/18 19:26:31.210815 [INFO] Skipping install X-Ray
2023/09/18 19:26:31.210820 [INFO] Skipping install Third Party License
2023/09/18 19:26:31.210825 [INFO] Skipping install httpd
2023/09/18 19:26:31.210832 [INFO] Instance has NOT been bootstrapped
2023/09/18 19:26:31.210835 [INFO] Executing instruction: installSqsd
2023/09/18 19:26:31.210840 [INFO] This is a web server environment instance, skip install sqsd daemon ...
2023/09/18 19:26:31.210845 [INFO] Instance has NOT been bootstrapped
2023/09/18 19:26:31.210848 [INFO] Executing instruction: bootstraphealthd
2023/09/18 19:26:31.210852 [INFO] this is an enhanced health env ...
2023/09/18 19:26:31.210864 [INFO] bootstrap healthd....
2023/09/18 19:26:31.210880 [INFO] Running command /bin/sh -c /usr/bin/id -u healthd || /usr/sbin/useradd --user-group healthd -s /sbin/nologin --create-home
2023/09/18 19:26:31.801829 [INFO] /usr/bin/id: ‘healthd’: no such user

2023/09/18 19:26:31.805718 [INFO] bootstrap healthd....
2023/09/18 19:26:31.805745 [INFO] Running command /bin/sh -c /usr/bin/id -u healthd || /usr/sbin/useradd --user-group healthd -s /sbin/nologin --create-home
2023/09/18 19:26:31.811867 [INFO] 1001

2023/09/18 19:26:31.814461 [INFO] configure bundle log for healthd...
2023/09/18 19:26:31.814557 [INFO] Executing instruction: GetSetupProxyLog
2023/09/18 19:26:31.814731 [INFO] Skipping Install yum packages
2023/09/18 19:26:31.814738 [INFO] Skipping Configure Python site-packages
2023/09/18 19:26:31.814744 [INFO] Skipping Install Python Modules
2023/09/18 19:26:31.814749 [INFO] Skipping MarkBaked
2023/09/18 19:26:31.814756 [INFO] Instance has NOT been bootstrapped
2023/09/18 19:26:31.814760 [INFO] Executing instruction: TuneSystemSettings
2023/09/18 19:26:31.814763 [INFO] Starting TuneSystemSettings
2023/09/18 19:26:31.814768 [INFO] Instance has NOT been bootstrapped
2023/09/18 19:26:31.815661 [INFO] Executing instruction: GetSetupLogRotate
2023/09/18 19:26:31.815666 [INFO] Initialize LogRotate files and directories
2023/09/18 19:26:31.827638 [INFO] Instance has NOT been bootstrapped
2023/09/18 19:26:31.827648 [INFO] Executing instruction: BootstrapCFNHup
2023/09/18 19:26:31.827652 [INFO] Bootstrap cfn-hup
2023/09/18 19:26:31.829109 [INFO] Copying file /opt/elasticbeanstalk/config/private/aws-eb-command-handler.conf to /etc/cfn/hooks.d/aws-eb-command-handler.conf
2023/09/18 19:26:31.830671 [INFO] Executing instruction: StartCFNHup
2023/09/18 19:26:31.830680 [INFO] Start cfn-hup
2023/09/18 19:26:31.830699 [INFO] Running command /bin/sh -c systemctl show -p PartOf cfn-hup.service
2023/09/18 19:26:31.863884 [INFO] cfn-hup is not registered with EB yet, registering it now
2023/09/18 19:26:31.863938 [INFO] Running command /bin/sh -c systemctl show -p PartOf cfn-hup.service
2023/09/18 19:26:31.895860 [INFO] Running command /bin/sh -c systemctl daemon-reload
2023/09/18 19:26:32.402788 [INFO] Running command /bin/sh -c systemctl reset-failed
2023/09/18 19:26:32.415233 [INFO] Running command /bin/sh -c systemctl is-enabled aws-eb.target
2023/09/18 19:26:32.428574 [INFO] Running command /bin/sh -c systemctl enable aws-eb.target
2023/09/18 19:26:32.902723 [INFO] Running command /bin/sh -c systemctl start aws-eb.target
2023/09/18 19:26:32.926209 [INFO] Running command /bin/sh -c systemctl enable cfn-hup.service
2023/09/18 19:26:33.416376 [INFO] Synchronizing state of cfn-hup.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable cfn-hup
Created symlink /etc/systemd/system/multi-user.target.wants/cfn-hup.service → /etc/systemd/system/cfn-hup.service.

2023/09/18 19:26:33.416404 [INFO] Running command /bin/sh -c systemctl is-active cfn-hup.service
2023/09/18 19:26:33.430346 [INFO] cfn-hup process is not running, starting it now
2023/09/18 19:26:33.430376 [INFO] Running command /bin/sh -c systemctl show -p PartOf cfn-hup.service
2023/09/18 19:26:33.445103 [INFO] Running command /bin/sh -c systemctl is-active cfn-hup.service
2023/09/18 19:26:33.456117 [INFO] Running command /bin/sh -c systemctl start cfn-hup.service
2023/09/18 19:26:33.517348 [INFO] Instance has NOT been bootstrapped
2023/09/18 19:26:33.517371 [INFO] Executing instruction: SetupPublishLogCronjob
2023/09/18 19:26:33.517375 [INFO] Setup publish logs cron job...
2023/09/18 19:26:33.517380 [INFO] Copying file /opt/elasticbeanstalk/config/private/logtasks/cron/publishlogs to /etc/cron.d/publishlogs
2023/09/18 19:26:33.519224 [INFO] Instance has NOT been bootstrapped
2023/09/18 19:26:33.519234 [INFO] Executing instruction: MarkBootstrapped
2023/09/18 19:26:33.519238 [INFO] Starting MarkBootstrapped
2023/09/18 19:26:33.519243 [INFO] Instance has NOT been bootstrapped
2023/09/18 19:26:33.519318 [INFO] Marked instance as Bootstrapped
2023/09/18 19:26:33.519322 [INFO] Executing instruction: Save CFN Stack Info
2023/09/18 19:26:33.519368 [INFO] Starting SwitchCFNStack
2023/09/18 19:26:33.519373 [INFO] Executing cleanup logic
2023/09/18 19:26:33.519382 [INFO] Platform Engine finished execution on command: env-launch

2023/09/18 19:26:52.642020 [INFO] Starting...
2023/09/18 19:26:52.642457 [INFO] Starting EBPlatform-PlatformEngine
2023/09/18 19:26:52.642483 [INFO] reading event message file
2023/09/18 19:26:52.642649 [INFO] Engine received EB command cfn-hup-exec

2023/09/18 19:26:52.714711 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBAutoScalingGroup --region eu-north-1
2023/09/18 19:26:53.113024 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBBeanstalkMetadata --region eu-north-1
2023/09/18 19:26:53.447183 [INFO] checking whether command app-deploy is applicable to this instance...
2023/09/18 19:26:53.447198 [INFO] this command is applicable to the instance, thus instance should execute command
2023/09/18 19:26:53.447200 [INFO] Engine command: (app-deploy)

2023/09/18 19:26:53.447204 [INFO] Downloading EB Application...
2023/09/18 19:26:53.447206 [INFO] Region: eu-north-1
2023/09/18 19:26:53.447208 [INFO] envID: e-7crmv8gmvp
2023/09/18 19:26:53.447210 [INFO] envBucket: elasticbeanstalk-eu-north-1-976090601851
2023/09/18 19:26:53.447213 [INFO] Using manifest file name from command request
2023/09/18 19:26:53.447218 [INFO] Unable to get version manifest file.
2023/09/18 19:26:53.447220 [INFO] Downloading latest manifest available.
2023/09/18 19:26:53.447222 [INFO] Download latest app version manifest
2023/09/18 19:26:53.447324 [INFO] resources/environments/e-7crmv8gmvp/_runtime/versions/manifest
2023/09/18 19:26:53.490299 [INFO] latestManifest key *: resources/environments/e-7crmv8gmvp/_runtime/versions/manifest_1695065142831

2023/09/18 19:26:53.490473 [INFO] Downloading: bucket: elasticbeanstalk-eu-north-1-976090601851, object: /resources/environments/e-7crmv8gmvp/_runtime/versions/manifest_1695065142831
2023/09/18 19:26:53.499659 [INFO] Download successful105bytes downloaded
2023/09/18 19:26:53.499726 [INFO] Trying to read and parse version manifest...
2023/09/18 19:26:53.499798 [INFO] Downloading: bucket: elasticbeanstalk-eu-north-1-976090601851, object: /resources/environments/e-7crmv8gmvp/_runtime/_versions/Tobii2/tobii2
2023/09/18 19:26:53.514109 [INFO] Download successful22718bytes downloaded
2023/09/18 19:26:53.514960 [INFO] Executing instruction: ElectLeader
2023/09/18 19:26:53.514965 [INFO] Running leader election for instance i-0cce30ac7deac8671...
2023/09/18 19:26:53.514969 [INFO] Calling the cfn-elect-cmd-leader to elect the command leader.
2023/09/18 19:26:53.514980 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-elect-cmd-leader --stack arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 --command-name ElasticBeanstalkCommand-AWSEBAutoScalingGroup --invocation-id 01be5a74-e453-4f3f-9e11-77a943a84148 --listener-id i-0cce30ac7deac8671 --region eu-north-1
2023/09/18 19:26:53.847910 [INFO] Instance is Leader.
2023/09/18 19:26:53.847952 [INFO] Executing instruction: stopSqsd
2023/09/18 19:26:53.847957 [INFO] This is a web server environment instance, skip stop sqsd daemon ...
2023/09/18 19:26:53.847961 [INFO] Executing instruction: PreBuildEbExtension
2023/09/18 19:26:53.847965 [INFO] Starting executing the config set Infra-EmbeddedPreBuild.
2023/09/18 19:26:53.847979 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBAutoScalingGroup --region eu-north-1 --configsets Infra-EmbeddedPreBuild
2023/09/18 19:26:54.254406 [INFO] Finished executing the config set Infra-EmbeddedPreBuild.

2023/09/18 19:26:54.254449 [INFO] Executing instruction: StageApplication
2023/09/18 19:26:54.254618 [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2023/09/18 19:26:54.254640 [INFO] Running command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/
2023/09/18 19:26:54.261925 [INFO] finished extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/ successfully
2023/09/18 19:26:54.264236 [INFO] Executing instruction: RunAppDeployPreBuildHooks
2023/09/18 19:26:54.264289 [INFO] Executing platform hooks in .platform/hooks/prebuild/
2023/09/18 19:26:54.264303 [INFO] The dir .platform/hooks/prebuild/ does not exist
2023/09/18 19:26:54.264306 [INFO] Finished running scripts in /var/app/staging/.platform/hooks/prebuild
2023/09/18 19:26:54.264310 [INFO] Executing instruction: InstallDependency
2023/09/18 19:26:54.264314 [INFO] checking dependencies file
2023/09/18 19:26:54.264325 [INFO] Installing dependencies with requirements.txt by using Pip
2023/09/18 19:26:54.264334 [INFO] Running command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt
2023/09/18 19:26:58.406776 [INFO] Collecting asgiref==3.7.2
  Downloading asgiref-3.7.2-py3-none-any.whl (24 kB)
Collecting Django==4.2.5
  Downloading Django-4.2.5-py3-none-any.whl (8.0 MB)
Collecting sqlparse==0.4.4
  Downloading sqlparse-0.4.4-py3-none-any.whl (41 kB)
Collecting typing-extensions==4.7.1
  Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Installing collected packages: typing-extensions, sqlparse, asgiref, Django
Successfully installed Django-4.2.5 asgiref-3.7.2 sqlparse-0.4.4 typing-extensions-4.7.1

2023/09/18 19:26:58.406796 [INFO] WARNING: You are using pip version 21.3.1; however, version 23.2.1 is available.
You should consider upgrading via the '/var/app/venv/staging-LQM1lest/bin/python3.9 -m pip install --upgrade pip' command.

2023/09/18 19:26:58.406802 [INFO] Executing instruction: check Procfile
2023/09/18 19:26:58.406836 [INFO] creating default Procfile...
2023/09/18 19:26:58.407017 [INFO] Executing instruction: configure X-Ray
2023/09/18 19:26:58.407022 [INFO] X-Ray is not enabled.
2023/09/18 19:26:58.407026 [INFO] Executing instruction: configure proxy server
2023/09/18 19:26:58.412761 [INFO] Executing instruction: configure healthd specific proxy conf
2023/09/18 19:26:58.413909 [INFO] Running command /bin/sh -c systemctl show -p PartOf healthd.service
2023/09/18 19:26:58.432444 [INFO] Running command /bin/sh -c systemctl daemon-reload
2023/09/18 19:26:58.823320 [INFO] Running command /bin/sh -c systemctl reset-failed
2023/09/18 19:26:58.852867 [INFO] Running command /bin/sh -c systemctl is-enabled aws-eb.target
2023/09/18 19:26:58.864926 [INFO] Running command /bin/sh -c systemctl enable aws-eb.target
2023/09/18 19:26:59.155317 [INFO] Running command /bin/sh -c systemctl start aws-eb.target
2023/09/18 19:26:59.164703 [INFO] Running command /bin/sh -c systemctl enable healthd.service
2023/09/18 19:26:59.410005 [INFO] Created symlink /etc/systemd/system/multi-user.target.wants/healthd.service → /etc/systemd/system/healthd.service.

2023/09/18 19:26:59.410034 [INFO] Running command /bin/sh -c systemctl show -p PartOf healthd.service
2023/09/18 19:26:59.422603 [INFO] Running command /bin/sh -c systemctl is-active healthd.service
2023/09/18 19:26:59.431040 [INFO] Running command /bin/sh -c systemctl start healthd.service
2023/09/18 19:26:59.510928 [INFO] Copying file /opt/elasticbeanstalk/config/private/healthd/healthd_logformat.conf to /var/proxy/staging/nginx/conf.d/healthd_logformat.conf
2023/09/18 19:26:59.512203 [INFO] Copying file /opt/elasticbeanstalk/config/private/healthd/healthd_nginx.conf to /var/proxy/staging/nginx/conf.d/elasticbeanstalk/healthd.conf
2023/09/18 19:26:59.513383 [INFO] Executing instruction: configure log streaming
2023/09/18 19:26:59.513389 [INFO] log streaming is not enabled
2023/09/18 19:26:59.513391 [INFO] disable log stream
2023/09/18 19:26:59.513403 [INFO] Running command /bin/sh -c systemctl show -p PartOf amazon-cloudwatch-agent.service
2023/09/18 19:26:59.530550 [INFO] Running command /bin/sh -c systemctl stop amazon-cloudwatch-agent.service
2023/09/18 19:26:59.545254 [INFO] Executing instruction: GetToggleForceRotate
2023/09/18 19:26:59.545277 [INFO] Checking if logs need forced rotation
2023/09/18 19:26:59.545310 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBAutoScalingGroup --region eu-north-1
2023/09/18 19:27:00.061180 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBBeanstalkMetadata --region eu-north-1
2023/09/18 19:27:00.537901 [INFO] Generating rsyslog config from Procfile
2023/09/18 19:27:00.539458 [INFO] Running command /bin/sh -c systemctl restart rsyslog.service
2023/09/18 19:27:00.915253 [INFO] Executing instruction: PostBuildEbExtension
2023/09/18 19:27:00.915308 [INFO] Starting executing the config set Infra-EmbeddedPostBuild.
2023/09/18 19:27:00.915328 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBAutoScalingGroup --region eu-north-1 --configsets Infra-EmbeddedPostBuild
2023/09/18 19:27:01.281054 [INFO] Finished executing the config set Infra-EmbeddedPostBuild.

2023/09/18 19:27:01.281078 [INFO] Executing instruction: CleanEbExtensions
2023/09/18 19:27:01.281099 [INFO] Cleaned ebextensions subdirectories from app staging directory.
2023/09/18 19:27:01.281101 [INFO] Executing instruction: RunAppDeployPreDeployHooks
2023/09/18 19:27:01.281123 [INFO] Executing platform hooks in .platform/hooks/predeploy/
2023/09/18 19:27:01.281140 [INFO] The dir .platform/hooks/predeploy/ does not exist
2023/09/18 19:27:01.281143 [INFO] Finished running scripts in /var/app/staging/.platform/hooks/predeploy
2023/09/18 19:27:01.281150 [INFO] Executing instruction: stop X-Ray
2023/09/18 19:27:01.281153 [INFO] stop X-Ray ...
2023/09/18 19:27:01.281164 [INFO] Running command /bin/sh -c systemctl show -p PartOf xray.service
2023/09/18 19:27:01.295488 [WARN] stopProcess Warning: process xray is not registered 
2023/09/18 19:27:01.295519 [INFO] Running command /bin/sh -c systemctl stop xray.service
2023/09/18 19:27:01.310280 [INFO] Executing instruction: stop proxy
2023/09/18 19:27:01.310323 [INFO] Running command /bin/sh -c systemctl show -p PartOf httpd.service
2023/09/18 19:27:01.324857 [WARN] deregisterProcess Warning: process httpd is not registered, skipping...

2023/09/18 19:27:01.324895 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2023/09/18 19:27:01.340023 [WARN] deregisterProcess Warning: process nginx is not registered, skipping...

2023/09/18 19:27:01.340043 [INFO] Executing instruction: FlipApplication
2023/09/18 19:27:01.340047 [INFO] Fetching environment variables...
2023/09/18 19:27:01.340163 [INFO] Purge old process...
2023/09/18 19:27:01.340179 [INFO] Removing /var/app/current/ if it exists
2023/09/18 19:27:01.340190 [INFO] Renaming /var/app/staging/ to /var/app/current/
2023/09/18 19:27:01.340205 [INFO] Register application processes...
2023/09/18 19:27:01.340233 [INFO] Registering the proc: web

2023/09/18 19:27:01.340243 [INFO] Running command /bin/sh -c systemctl show -p PartOf web.service
2023/09/18 19:27:01.351323 [INFO] Running command /bin/sh -c systemctl daemon-reload
2023/09/18 19:27:01.602093 [INFO] Running command /bin/sh -c systemctl reset-failed
2023/09/18 19:27:01.610883 [INFO] Running command /bin/sh -c systemctl is-enabled eb-app.target
2023/09/18 19:27:01.618484 [INFO] Copying file /opt/elasticbeanstalk/config/private/aws-eb.target to /etc/systemd/system/eb-app.target
2023/09/18 19:27:01.619452 [INFO] Running command /bin/sh -c systemctl enable eb-app.target
2023/09/18 19:27:01.955715 [INFO] Created symlink /etc/systemd/system/multi-user.target.wants/eb-app.target → /etc/systemd/system/eb-app.target.

2023/09/18 19:27:01.955752 [INFO] Running command /bin/sh -c systemctl start eb-app.target
2023/09/18 19:27:01.977870 [INFO] Running command /bin/sh -c systemctl enable web.service
2023/09/18 19:27:02.222128 [INFO] Created symlink /etc/systemd/system/multi-user.target.wants/web.service → /etc/systemd/system/web.service.

2023/09/18 19:27:02.222164 [INFO] Running command /bin/sh -c systemctl show -p PartOf web.service
2023/09/18 19:27:02.263074 [INFO] Running command /bin/sh -c systemctl is-active web.service
2023/09/18 19:27:02.273319 [INFO] Running command /bin/sh -c systemctl start web.service
2023/09/18 19:27:02.330172 [INFO] Executing instruction: start X-Ray
2023/09/18 19:27:02.330201 [INFO] X-Ray is not enabled.
2023/09/18 19:27:02.330208 [INFO] Executing instruction: start proxy with new configuration
2023/09/18 19:27:02.330235 [INFO] Running command /bin/sh -c /usr/sbin/nginx -t -c /var/proxy/staging/nginx/nginx.conf
2023/09/18 19:27:02.377042 [INFO] nginx: [warn] could not build optimal types_hash, you should increase either types_hash_max_size: 1024 or types_hash_bucket_size: 64; ignoring types_hash_bucket_size
nginx: the configuration file /var/proxy/staging/nginx/nginx.conf syntax is ok
nginx: configuration file /var/proxy/staging/nginx/nginx.conf test is successful

2023/09/18 19:27:02.377199 [INFO] Running command /bin/sh -c cp -rp /var/proxy/staging/nginx/* /etc/nginx
2023/09/18 19:27:02.384102 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2023/09/18 19:27:02.406959 [INFO] Running command /bin/sh -c systemctl daemon-reload
2023/09/18 19:27:02.721872 [INFO] Running command /bin/sh -c systemctl reset-failed
2023/09/18 19:27:02.777897 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2023/09/18 19:27:02.795024 [INFO] Running command /bin/sh -c systemctl is-active nginx.service
2023/09/18 19:27:02.803567 [INFO] Running command /bin/sh -c systemctl start nginx.service
2023/09/18 19:27:02.900062 [INFO] Executing instruction: configureSqsd
2023/09/18 19:27:02.900086 [INFO] This is a web server environment instance, skip configure sqsd daemon ...
2023/09/18 19:27:02.900091 [INFO] Executing instruction: startSqsd
2023/09/18 19:27:02.900094 [INFO] This is a web server environment instance, skip start sqsd daemon ...
2023/09/18 19:27:02.900098 [INFO] Executing instruction: Track pids in healthd
2023/09/18 19:27:02.900102 [INFO] This is an enhanced health env...
2023/09/18 19:27:02.900117 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf aws-eb.target | cut -d= -f2
2023/09/18 19:27:02.924546 [INFO] cfn-hup.service nginx.service healthd.service

2023/09/18 19:27:02.924577 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf eb-app.target | cut -d= -f2
2023/09/18 19:27:02.940947 [INFO] web.service

2023/09/18 19:27:02.941342 [INFO] Executing instruction: RunAppDeployPostDeployHooks
2023/09/18 19:27:02.941414 [INFO] Executing platform hooks in .platform/hooks/postdeploy/
2023/09/18 19:27:02.941434 [INFO] The dir .platform/hooks/postdeploy/ does not exist
2023/09/18 19:27:02.941438 [INFO] Finished running scripts in /var/app/current/.platform/hooks/postdeploy
2023/09/18 19:27:02.941446 [INFO] Executing cleanup logic
2023/09/18 19:27:02.941694 [INFO] CommandService Response: {"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"Engine execution has succeeded.","returncode":0,"events":[{"msg":"Instance deployment successfully generated a 'Procfile'.","timestamp":1695065218406,"severity":"INFO"},{"msg":"Instance deployment completed successfully.","timestamp":1695065222941,"severity":"INFO"}]}]}

2023/09/18 19:27:02.941857 [INFO] Platform Engine finished execution on command: app-deploy

2023/09/18 19:29:51.484880 [INFO] Starting...
2023/09/18 19:29:51.484933 [INFO] Starting EBPlatform-PlatformEngine
2023/09/18 19:29:51.484954 [INFO] reading event message file
2023/09/18 19:29:51.485102 [INFO] Engine received EB command cfn-hup-exec

2023/09/18 19:29:51.561896 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBAutoScalingGroup --region eu-north-1
2023/09/18 19:29:51.974363 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:eu-north-1:976090601851:stack/awseb-e-7crmv8gmvp-stack/28f866b0-5659-11ee-b0e1-0640d6e4bb20 -r AWSEBBeanstalkMetadata --region eu-north-1
2023/09/18 19:29:52.322596 [INFO] checking whether command bundle-log is applicable to this instance...
2023/09/18 19:29:52.322611 [INFO] this command is applicable to the instance, thus instance should execute command
2023/09/18 19:29:52.322614 [INFO] Engine command: (bundle-log)

2023/09/18 19:29:52.322685 [INFO] Executing instruction: GetBundleLogs
2023/09/18 19:29:52.322689 [INFO] Bundle Logs...


2023-09-18

What's wrong with qsort comparator?

So I've been doing my C homework, and there was this task: "An array of ints is given; write a function that sorts them in the following way: first the even numbers in non-decreasing order, then the odd numbers in non-increasing order." For example, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] -> [2, 4, 6, 8, 10, 9, 7, 5, 3, 1].

The task itself is about writing a comparator function for qsort correctly. I wrote the following cmp:

int
cmp(const void *elem1, const void *elem2)
{
    int p_number1 = *(int *) elem1, p_number2 = *(int *) elem2, diff_flag = 0;
    if (p_number1 > p_number2) {
        diff_flag = 1;
    } else if (p_number1 < p_number2) {
        diff_flag = -1;
    }
    if ((p_number1 & 1) == (p_number2 & 1)) { // same parity
        return diff_flag == 0 ? 0 : diff_flag ^ ((p_number1 & 1) << 31);
        /* in even case, diff_flag will be returned, otherwise,
         * number with sign that is different from diff_flag's will be returned */
    }
    return (p_number1 & 1) - (p_number2 & 1); // if first is odd, and second is even, 1 is returned, and vice versa
}

and the testing system spits out a runtime error. The comparator does return INT_MIN and INT_MAX for the less-than and greater-than cases respectively, but doesn't that satisfy qsort's specification? Besides, it worked fine on all the arrays I tested locally. Does anyone know why there is a runtime error?

P.S. I understand I'm writing this in the least readable way possible, but I'm required to rely mostly on bit operations and make everything as efficient as possible, so I apologize for the complexity.



How to get the app version using a pre-rendering process in APEX

I'm trying to get the app version so I can log it in a table when inserting or updating data.

I'm currently doing this for APP_USER and sysdate, which works fine.

PL/SQL code: (screenshot in the original post)

I would also like to add the Name and Version.

I tested APP_NAME and that works, but APP_VERSION returns nothing — why?

I was also wondering what other APP_ variables are available?
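One hedged suggestion: there is no built-in APP_VERSION substitution string, which would explain the empty result, but the version entered under Shared Components → Definition is exposed through the apex_applications dictionary view, so a pre-rendering process can read it with a query along these lines (assuming that view is visible to the parsing schema; :P0_APP_VERSION is a made-up item name for illustration):

```sql
-- Sketch: look up the current application's declared version.
select version
  into :P0_APP_VERSION
  from apex_applications
 where application_id = :APP_ID;
```

The same view also lists the other application-level attributes available for this kind of lookup, which may answer the "what other APP_ variables exist" question more completely than the substitution-string list alone.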



2023-09-17

How do you iterate over continuous subsequences of a slice that contain equal elements?

I have a slice of elements whose type implements PartialEq. For illustration, let's say it looks like this:

let data = [1,1,1,2,2,3,4,5,5,5,5,6];

I would like to iterate over borrowed slices of this sequence such that all elements of such slices are equal as per PartialEq. For example in the above slice data I would like an iterator which yields:

&data[0..3]   // [1,1,1]
&data[3..5]   // [2,2]
&data[5..6]   // [3]
&data[6..7]   // [4]
&data[7..11]  // [5,5,5,5]
&data[11..12] // [6]

It looks like slice::group_by is exactly what I need, but as of Rust 1.72.0, it is not yet stable. Is there any straightforward way to get this functionality in a stable way, either by use of a 3rd party crate or by combining the use of stable std lib APIs?
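Until slice::group_by stabilizes, the same iterator can be hand-rolled on stable Rust from split_at and iter::from_fn; a minimal sketch (runs is a made-up helper name, not a std API):

```rust
/// Stable stand-in for the unstable `slice::group_by`: yields maximal runs
/// of consecutive equal elements as subslices of the input.
fn runs<'a, T: PartialEq>(mut data: &'a [T]) -> impl Iterator<Item = &'a [T]> + 'a {
    std::iter::from_fn(move || {
        if data.is_empty() {
            return None;
        }
        // Extend the run while elements keep comparing equal to the first.
        let mut len = 1;
        while len < data.len() && data[len] == data[0] {
            len += 1;
        }
        let (run, rest) = data.split_at(len);
        data = rest;
        Some(run)
    })
}

fn main() {
    let data = [1, 1, 1, 2, 2, 3, 4, 5, 5, 5, 5, 6];
    let groups: Vec<&[i32]> = runs(&data).collect();
    assert_eq!(groups[0], &[1, 1, 1][..]);
    assert_eq!(groups[4], &[5, 5, 5, 5][..]);
    println!("{:?}", groups);
}
```

The itertools crate's group_by would also work, but it yields per-group iterators keyed by a closure rather than borrowed subslices, so the hand-rolled version stays closer to the `&data[a..b]` output asked for here.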



Can we monitor gstreamer pipeline opened by OPENCV through the gstreamer code?

I am opening a GStreamer pipeline with OpenCV. As long as data is coming in, everything works fine. But when the data stops for any reason, the pipeline gets stuck, and OpenCV gets stuck with it. Below is the code I am using:

#include <opencv2/opencv.hpp>
#include <cstdio>

#define UDP_URL "udpsrc port=15004 buffer-size=5000000 ! watchdog timeout=1000 ! tsdemux latency=0 ! h264parse ! v4l2h264dec ! imxvideoconvert_g2d ! video/x-raw,format=BGRA,width=1280,height=960 ! appsink max-buffers=2"

int main()
{
    cv::VideoCapture video;
    cv::Mat frame;
    video.open(UDP_URL, cv::CAP_GSTREAMER);
    if (!video.isOpened()) {
        printf("Error in opening.\n");
        return -1;
    }

    while (1) {
        if (video.read(frame)) {
            // some operation on frame
        } else {
            break;
        }
    }

    video.release();
    return 0;
}

In the code above, when there is no data on port 15004, video.read(frame) gets stuck — I think it is the v4l2h264dec decoder that hangs. Even when data starts coming again, it stays stuck on the same call. During GStreamer debugging I got "gstreamer pipeline is in halt state", even though I can see with tcpdump that data is arriving on port 15004. I am thinking of monitoring the pipeline from GStreamer code, but I don't know how. I am using an i.MX8QM board, GStreamer 1.0, and OpenCV 4.6.0.

I have not tried any monitoring code yet, because I do not know how to access OpenCV's GStreamer backend from my source file.