2022-02-28

Tailwind CSS v3: custom colors with string interpolation

I've just upgraded tailwind in my react project to get rid of this warning:

warn - The `purge`/`content` options have changed in Tailwind CSS v3.0.
warn - Update your configuration file to eliminate this warning.
warn - https://tailwindcss.com/docs/upgrade-guide#configure-content-sources

As I understand it, the 'purge' property is now replaced with 'content'. I replaced the property name and for the most part everything worked the same. However, in my old Tailwind config file I needed to safelist a bunch of colors in order to make them show up. Without the 'purge' property to use, I don't know where to put the safelist.

module.exports = {
  mode: 'jit',

  purge: {
    content: ['./src/**/*.{js,jsx,ts,tsx}'],
    options: {
      safelist: [
        'bg-[#ffffff]',
        'bg-[#d9ffff]',
        'bg-[#cc80ff]',
        'bg-[#c2ff00]',
        'bg-[#ffb5b5]',
        'bg-[#909090]',
        'bg-[#3050f8]',
        'bg-[#ff0d0d]',
        'bg-[#90e050]',
        'bg-[#b3e3f5]',
        'bg-[#ab5cf2]',
        'bg-[#8aff00]',
        'bg-[#bfa6a6]',
        'bg-[#f0c8a0]'
      ],
    },
  },
  theme: {
    extend: {},
  },
  plugins: [],
};

The code that requires the safelist to run is interpolating the properties into the class name of an element:

<div className={`text-white ${props.bg}`}></div>
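In v3 the safelist moved to a top-level `safelist` key in the config (and `mode: 'jit'` can be dropped, since the JIT engine is now the only one). A sketch of the migrated config:

```javascript
// tailwind.config.js
module.exports = {
  content: ['./src/**/*.{js,jsx,ts,tsx}'],
  safelist: [
    'bg-[#ffffff]',
    'bg-[#d9ffff]',
    // ...the remaining arbitrary-value background classes
  ],
  theme: {
    extend: {},
  },
  plugins: [],
};
```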


How to locate the Constraint validation text using Selenium and Java

I'm trying to make a Selenium test script that checks if a bootstrap validation popover appears when submitting a form containing a bad value.

My script below returns this error:

org.openqa.selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":"#amount"}

Relevant code:

WebDriver driver = new ChromeDriver();
WebElement field = driver.findElement(By.id("amount"));  //errors here every execution
Boolean is_valid = (Boolean)WebUI.executeScript("return arguments[0].checkValidity();", field);

if (!is_valid) {
     //intentionally fail test
}

When I inspect the form field, I see the id equals amount, so why am I unable to find this element in Selenium?



EDIT Here's my full script that I omitted for brevity:

WebUI.openBrowser('')

WebUI.navigateToUrl('Foo')

WebUI.setText(findTestObject('Object Repository/Page_bar/input_concat(Recipient, , s email address)_email'), 'fakepersonaluser1@example.com')

WebUI.setText(findTestObject('Object Repository/Page_bar/input_Amount (USD)_amount'), '10001')

WebUI.click(findTestObject('Object Repository/Page_bar/button_Send Payment'))

 
WebDriver driver = new ChromeDriver();
WebElement field = driver.findElement(By.id("amount"));
Boolean is_valid = (Boolean)WebUI.executeScript("return arguments[0].checkValidity();", field);

if (!is_valid) {
    //intentionally fail test
}

WebUI.closeBrowser()


Unable to install yacas with RStudio

I have already downloaded Yacas on my Windows machine. I first installed the Ryacas package directly from RStudio, but no yacas functions could be run. I then ran this command and got the following errors:

> devtools::install_github("r-cas/ryacas", build_opts = c("--no-resave-data", "--no-manual"))
Downloading GitHub repo r-cas/ryacas@HEAD
√  checking for file 'C:\Users\MyName\AppData\Local\Temp\RtmpEpL65w\remotes58d46bf01207\r-cas-ryacas-ea52235/DESCRIPTION' (684ms)
-  preparing 'Ryacas': (1.4s)
√  checking DESCRIPTION meta-information ... 
-  cleaning src
-  checking for LF line-endings in source and make files and shell scripts (1s)
-  checking for empty or unneeded directories (501ms)
-  building 'Ryacas_1.1.3.9002.tar.gz'
   
Installing package into ‘C:/Users/MyName/Documents/R/win-library/4.1’ (as ‘lib’ is unspecified)
* installing *source* package 'Ryacas' ...
ERROR: cannot remove earlier installation, is it in use?
* removing 'C:/Users/MyName/Documents/R/win-library/4.1/Ryacas'
* restoring previous 'C:/Users/MyName/Documents/R/win-library/4.1/Ryacas'
Warning in file.copy(lp, dirname(pkgdir), recursive = TRUE, copy.date = TRUE) :
  problem copying C:\Users\MyName\Documents\R\win-library\4.1\00LOCK-Ryacas\Ryacas\libs\x64\Ryacas.dll to C:\Users\MyName\Documents\R\win-library\4.1\Ryacas\libs\x64\Ryacas.dll: Permission denied
Warning message:
In i.p(...) :
  installation of package ‘C:/Users/Name~1/AppData/Local/Temp/RtmpEpL65w/file58d4192555cc/Ryacas_1.1.3.9002.tar.gz’ had non-zero exit status

I have also tried these:

> system.file(package = "Ryacas", "yacdir") : install.packages('Ryacas')
Installing package into ‘C:/Users/Name/Documents/R/win-library/4.1’
(as ‘lib’ is unspecified)
trying URL 'http://cran.rstudio.com/bin/windows/contrib/4.1/Ryacas_1.1.3.1.zip'
Content type 'application/zip' length 1739652 bytes (1.7 MB)
downloaded 1.7 MB

package ‘Ryacas’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
    C:\Users\Name\AppData\Local\Temp\RtmpeGHfz9\downloaded_packages
Error in system.file(package = "Ryacas", "yacdir"):install.packages("Ryacas") : 
  argument of length 0


> yacasInstall()
Error in yacasInstall() : could not find function "yacasInstall"


> yacas("n := (10 + 2) * 5")
Error in yacas("n := (10 + 2) * 5") : could not find function "yacas"


> Set(ns, (10 + 2) * 5)
Error in Set(ns, (10 + 2) * 5) : could not find function "Set"

How to run/install ryacas?

Update:

There are two R packages for yacas, and some references online refer to the old package. Some old commands are still useful; just make sure the old package is the one installed. Also, the yacas commands from the official manual cannot be used directly from the R interface. They have to be wrapped, e.g. yacas("..."), or replaced with the yacas commands provided specifically for R.



Using curl to download golang tarball produces strange result

I was trying to install golang on Ubuntu 21.10. This requires downloading the golang tarball and extracting it to a particular place on the filesystem. First I tried:

curl -O https://go.dev/dl/go1.17.7.linux-amd64.tar.gz

which simply created a file with following text:

<a href="https://dl.google.com/go/go1.17.7.linux-amd64.tar.gz">Found</a>.

The same command output with verbose flag :

$ curl -v https://go.dev/dl/go1.17.7.linux-amd64.tar.gz
*   Trying 216.239.32.21:443...
* Connected to go.dev (216.239.32.21) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=go.dev
*  start date: Feb 11 11:23:46 2022 GMT
*  expire date: May 12 11:23:45 2022 GMT
*  subjectAltName: host "go.dev" matched cert's "go.dev"
*  issuer: C=US; O=Google Trust Services LLC; CN=GTS CA 1D4
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55726a1bb5e0)
> GET /dl/go1.17.7.linux-amd64.tar.gz HTTP/2
> Host: go.dev
> user-agent: curl/7.74.0
> accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 302
< content-type: text/html; charset=utf-8
< content-security-policy: connect-src 'self' www.google-analytics.com stats.g.doubleclick.net ; default-src 'self' ; font-src 'self' fonts.googleapis.com fonts.gstatic.com data: ; frame-ancestors 'self' ; frame-src 'self' www.google.com feedback.googleusercontent.com www.googletagmanager.com scone-pa.clients6.google.com www.youtube.com player.vimeo.com ; img-src 'self' www.google.com www.google-analytics.com ssl.gstatic.com www.gstatic.com gstatic.com data: * ; object-src 'none' ; script-src 'self' 'sha256-n6OdwTrm52KqKm6aHYgD0TFUdMgww4a0GQlIAVrMzck=' 'sha256-4ryYrf7Y5daLOBv0CpYtyBIcJPZkRD2eBPdfqsN3r1M=' 'sha256-sVKX08+SqOmnWhiySYk3xC7RDUgKyAkmbXV2GWts4fo=' www.google.com apis.google.com www.gstatic.com gstatic.com support.google.com www.googletagmanager.com www.google-analytics.com ssl.google-analytics.com tagmanager.google.com ; style-src 'self' 'unsafe-inline' fonts.googleapis.com feedback.googleusercontent.com www.gstatic.com gstatic.com tagmanager.google.com ;
< location: https://dl.google.com/go/go1.17.7.linux-amd64.tar.gz
< strict-transport-security: max-age=31536000; includeSubDomains; preload
< x-cloud-trace-context: 7611c1786413210c614a80a1da377a17
< date: Thu, 24 Feb 2022 05:02:13 GMT
< server: Google Frontend
< content-length: 75
<
<a href="https://dl.google.com/go/go1.17.7.linux-amd64.tar.gz">Found</a>.

* Connection #0 to host go.dev left intact

The tarball download works fine with the wget command. I tried reading more about the differences between the two tools and feel that curl should have worked. I have never had issues using curl before when downloading archives, e.g. the Linux source tarball. I am really not sure whether the issue is with curl or with the Go download server. Any explanation would be helpful.
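The verbose output above already shows the cause: go.dev answers with an HTTP 302 and a location: header pointing at dl.google.com. wget follows redirects by default; curl only does so when told to with -L/--location:

```shell
# -L follows the 302 redirect; -O keeps the remote file name
curl -LO https://go.dev/dl/go1.17.7.linux-amd64.tar.gz
```

Without -L, curl faithfully saves the 75-byte redirect page, which is exactly the "Found" snippet seen here.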



How to override DOM changes?

I am working on a demo in which the client has a modal box whose display is set to none. When I click the CTA button, fadein and fadeout classes are applied and display:none changes to display:block for 3-4 seconds, after which display:block is turned back into display:none. I want to remove the display:none/display:block toggle and remove the fadein and fadeout classes.

Is there a way I can use the console to remove the display toggle, as well as remove those classes, using JavaScript?

Edit: I have uploaded the DOM changes which I want to remove.




Formatting to get desired constraint in python-constraint

This is a constraint problem being solved with python-constraint. I have 10 one-hour slots with the following constraint: no two activities/tasks can occur in the same slot. I have 9 activities that fill 9 hours of the schedule. Then I have 3 tasks, 'x', 'y', and 'z', and I want exactly one of those tasks to be chosen for the remaining slot.

This is what I have. Here I am adding the 9 activities as variables:

problem.addVariables(['Activity1','Activity2', 'Activity3', 'Activity4', 'Activity5', 'Activity6', 'Activity7', 'Activity8', 'Activity9'], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

Here I am trying to add one of the three tasks as a variable:

d = {'x': 1, 'y': 2, 'z': 3}
for k,v in d.items():
    problem.addVariables(["%s" %(k)], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

Here is for adding the constraint:

problem.addConstraint(AllDifferentConstraint(),['Activity1','Activity2', 'Activity3', 'Activity4', 'Activity5', 'Activity6', 'Activity7', 'Activity8', 'Activity9', "%s" %(k)])

Line to get solution:

solution = problem.getSolutions()

This is one of the outputs for the schedule, formatted as a pandas series of solution:

Activity4       1
x               2
Activity7       3
Activity3       4
Activity2       5
Activity1       6
Activity8       7
Activity9       8
Activity6       9
Activity5       10
y               10
z               10

How can I get just one of the tasks to show up in the output? The code works for the one task that is assigned, in this case x, but y and z were still added at the end, violating the constraint that only one thing be assigned to each hour. I don't want y and z to show up in this example output, because x was supposed to be the one task chosen.



I have no errors in my scripts but receive many errors in the xlocnum, io, and xiosbase files; I tried looking at the errors but don't understand them

The errors might be showing up because of the 'internal' macro I created and defined as static. Here is the build output:

1>------ Build started: Project: C++_Game_Stuff, Configuration: Debug Win32 ------
1>win64_maybe32_idk_platform.cpp
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xiosbase(41,1): error C2159: 
more than one storage class specified
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xiosbase(93): message : see 
reference to class template instantiation 'std::_Iosb<_Dummy>' being compiled
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xiosbase(41,43): error C2059: 
syntax error: '='
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xiosbase(41,1): error C2238: 
unexpected token(s) preceding ';'
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xiosbase(112,56): error C2589: 
'static': illegal token on right side of '::'
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xiosbase(112,1): error C2062: 
type 'unknown-type' unexpected
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xlocnum(1439,73): error C2589: 
'static': illegal token on right side of '::'
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xlocnum(1555): message : see 
reference to class template instantiation 'std::num_put<_Elem,_OutIt>' being compiled
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\xlocnum(1522,73): error C2589: 
'static': illegal token on right side of '::'
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\ios(216,45): error C2059: syntax 
error: 'type'
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\ios(216,66): error C2143: syntax 
error: missing ';' before '{'
1>C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\ios(216,66): error C2447: '{': 
missing function header (old-style formal list?)
1>C:\Users\maazs\source\repos\C++_Game_Stuff\C++_Game_Stuff\win64_maybe32_idk_platform.cpp(71,5): warning C4007: 'WinMain': must be '__stdcall'
1>Done building project "C++_Game_Stuff.vcxproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

The rest of the output is just warnings about int-to-float conversions.



2022-02-27

Place a Window behind desktop icons using PyQt on Ubuntu/GNOME

I'm trying to develop a simple cross-platform wallpaper manager, but I am not able to find any method to place my PyQt window between the current wallpaper and the desktop icons using Xlib (on Windows and macOS it's way easier and works perfectly).

This works fine on Cinnamon (with a little workaround of just simulating a click), but not on GNOME. Can anyone help or give me any clue? (I'm providing all this code just to give a minimal executable piece, but the key part, I guess, is right after the 'if "GNOME"...' line.)

import os
from typing import List

import Xlib.display
import Xlib.X
import Xlib.Xatom
import ewmh
import pygetwindow
from pynput import mouse

DISP = Xlib.display.Display()
SCREEN = DISP.screen()
ROOT = DISP.screen().root
EWMH = ewmh.EWMH(_display=DISP, root=ROOT)

def sendBehind(hWnd):

    w = DISP.create_resource_object('window', hWnd)
    w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_BELOW', False), ], Xlib.X.PropModeReplace)
    w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_SKIP_TASKBAR', False), ], Xlib.X.PropModeAppend)
    w.change_property(DISP.intern_atom('_NET_WM_STATE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_STATE_SKIP_PAGER', False), ], Xlib.X.PropModeAppend)
    DISP.flush()

    # This sends the window below all others, but not behind the desktop icons
    w.change_property(DISP.intern_atom('_NET_WM_WINDOW_TYPE', False), Xlib.Xatom.ATOM, 32, [DISP.intern_atom('_NET_WM_WINDOW_TYPE_DESKTOP', False), ], Xlib.X.PropModeReplace)
    DISP.flush()

    if "GNOME" in os.environ.get('XDG_CURRENT_DESKTOP', ""):
        # This sends the window "too far behind" (below all others, including the wallpaper, as if unmapped)
        # Trying to figure out how to raise it on top of the wallpaper but behind the desktop icons
        desktop = _xlibGetAllWindows(title="gnome-shell")
        if desktop:
            w.reparent(desktop[-1], 0, 0)
            DISP.flush()

    else:
        # Mint/Cinnamon: just clicking on the desktop raises it, sending the window/wallpaper to the bottom!
        m = mouse.Controller()
        m.move(SCREEN.width_in_pixels - 1, 100)
        m.click(mouse.Button.left, 1)

        return '_NET_WM_WINDOW_TYPE_DESKTOP' in EWMH.getWmWindowType(hWnd, str=True)

def _xlibGetAllWindows(parent: int = None, title: str = "") -> List[int]:

    if not parent:
        parent = ROOT
    allWindows = [parent]

    def findit(hwnd):
        query = hwnd.query_tree()
        for child in query.children:
            allWindows.append(child)
            findit(child)

    findit(parent)
    if not title:
        matches = allWindows
    else:
        matches = []
        for w in allWindows:
            if w.get_wm_name() == title:
                matches.append(w)
    return matches

hWnd = pygetwindow.getActiveWindow()
sendBehind(hWnd._hWnd)


Android Studio SDK messed up

I'm trying to install Android Studio. I downloaded it from this link: https://r5---sn-p5qs7nzk.gvt1.com/edgedl/android/studio/ide-zips/2021.1.1.21/android-studio-2021.1.1.21-linux.tar.gz. I also downloaded the SDK from here: https://dl.google.com/android/repository/platform-tools_r33.0.0-linux.zip. I untarred Android Studio and, in the bin folder, ran chmod +x studio.sh && ./studio.sh, which brought up the Android Studio wizard. When it got to the SDK part, I couldn't select Android SDK Tools because it said "(installed)", but it isn't installed. I know what you might say: Android Studio comes with the SDK! No, it doesn't! When I try to change the SDK directory it says: "Nothing to do! Android SDK is up to date. SDK emulator directory is missing." What should I do?



Drop-down not updating when calling RaisePropertyChanged (Prism, BindableBase, WPF)

I want to update the data of a drop-down in a view from my view model. The data is updated in some objects in my model. I understand that the INotifyPropertyChanged interface only works at the view-model level. But I thought that when the view model subscribes to an event, I could update my view by raising RaisePropertyChanged for the object in the view model which needs to be updated.

I call UpdateConfiguration(), which triggers fetching the new data. Because the function PutSetupWorkspaceConfiguration() is time-consuming, it returns the first data and then computes the rest in the background. After it's done, it raises the IsChanged event, which works. So the method OnListChanged() gets called, but then the GUI does not update.

What am I doing wrong since the event and the getter gets called?

/*viewmodel*/
private SetupWorkspaceConfiguration setupWorkspaceConfiguration;

public ObservableCollection<TestingModel> TestingModels
{
   get => setupWorkspaceConfiguration?.TestingModels; //The event calls getter but the gui does not update the data
}

public async Task UpdateConfiguration(int? projectId)
{
   setupWorkspaceConfiguration = await Task.Run(async () => await setupWorkspaceConfigurationStore.PutSetupWorkspaceConfiguration(projectId));            
   setupWorkspaceConfiguration.IsChanged -= OnListChanged;
   setupWorkspaceConfiguration.IsChanged += OnListChanged;
}
private void OnListChanged(object? sender, System.EventArgs e)
{
    RaisePropertyChanged(nameof(TestingModels));
}

EDIT: The elements in the list setupWorkspaceConfiguration.TestingModels are updated; this works fine. But when I add new elements to the list, they are not shown in the drop-down.



Compare two different images and find the differences

I have a webcam which takes pictures of a concrete slab, and I want to check whether there are objects on the slab or not. The objects could be anything, so they cannot be enumerated as a class. Unfortunately I cannot compare the webcam image directly with a reference image of the slab without objects, because the camera image can shift minimally in the x and y directions and the lighting is not always the same either, so I cannot use cv2.subtract. I would prefer foreground/background subtraction, where the background is just my concrete slab and the foreground is the objects. But since the objects don't move and lie still on the slab, I can't use cv2.createBackgroundSubtractorMOG2() either.

The Pictures look like this:

The concrete slab without any objects:


The slab with objects:

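Since the shift is small and global, one sketch of an approach (assuming NumPy, grayscale images already loaded as 2-D arrays, and function names of my own choosing): estimate the x/y offset with phase correlation, roll the new image back into register with the empty-slab reference, then threshold the absolute difference. Normalizing or blurring both images first would further reduce the lighting sensitivity:

```python
import numpy as np

def estimate_shift(reference, image):
    # Phase correlation: the peak of the inverse FFT of the normalized
    # cross-power spectrum sits at the translation between the two images.
    f = np.fft.fft2(reference) * np.conj(np.fft.fft2(image))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx  # the roll that aligns `image` onto `reference`

def difference_mask(reference, image, thresh=40):
    dy, dx = estimate_shift(reference, image)
    aligned = np.roll(image, (dy, dx), axis=(0, 1))
    diff = np.abs(aligned.astype(float) - reference.astype(float))
    return diff > thresh  # True where something new sits on the slab
```

The OpenCV equivalents would be cv2.findTransformECC for (sub-pixel) alignment and cv2.absdiff plus cv2.threshold on the aligned pair.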



Make hidden a child's relation / Laravel Eloquent

Model structure:

  • AccessoryGroup (hasMany: accessories)
  • Accessory (belongsTo: accessory_group)

Get all accessory groups with accessories (with accessory_group)

In accessories I needed the accessory_group relation, to create some custom attribute (appends).

But afterwards I don't want my API to return the relation from accessories->accessory_group.

AccessoryGroup
            ::with([
                'accessories' => function ($query) {
                    $accessory = $query->getRelated();
                    // Can I somehow use this $accessory relation for my problem
                    // something like $accessory->makeHidden('accessory_group'); - not working
                },
                'accessories.accessory_group',
            ])
            ->get();

When I add public $hidden = ['accessory_group']; to the Accessory model I get what I want, but then it's always hidden (I always need to use makeVisible).



EASY PYTHON SELENIUM: How do I download an mp4 WITHOUT using urllib?

I'm trying to download this video: https://www.learningcontainer.com/wp-content/uploads/2020/05/sample-mp4-file.mp4

I tried the following but it doesn't work:

import urllib.request

link = "https://www.learningcontainer.com/wp-content/uploads/2020/05/sample-mp4-file.mp4"
urllib.request.urlretrieve(link, 'video.mp4')

I'm getting:

urllib.error.HTTPError: HTTP Error 403: Forbidden

Is there another way to download an mp4 file without using urllib?
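A 403 from this kind of host is usually about the request headers, not the library: the server rejects Python's default User-Agent. A hedged sketch using requests (a third-party package, pip install requests); sending the same header through urllib would most likely work too:

```python
import requests

def download(url: str, dest: str) -> None:
    # Many servers return 403 to Python's default user agent; sending a
    # browser-like User-Agent header is usually enough to get the file.
    headers = {"User-Agent": "Mozilla/5.0"}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 16):
                f.write(chunk)

# download("https://www.learningcontainer.com/wp-content/uploads/2020/05/sample-mp4-file.mp4",
#          "video.mp4")
```

stream=True avoids holding the whole video in memory while writing it to disk.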



How to set multisample renderpass with depth attachment?

First of all, I get no errors from the validation layers, and multisampling works. But when I copy the resolved depth image, I can see that it is empty.

When I copy the depth MSAA image, I can see that it is not empty. (As expected, the validation layers complained because I copied a VK_SAMPLE_COUNT_4_BIT image to a VK_SAMPLE_COUNT_1_BIT one, but interestingly it still worked; this is how I found out that there is no problem with the depth MSAA image.)

Therefore the problem should be in the depth resolve. Where am I making a mistake?

VkAttachmentDescription colorAttachment_MSAA{};
colorAttachment_MSAA.samples = VK_SAMPLE_COUNT_4_BIT;
...

VkAttachmentDescription colorAttachment_Resolve{};
colorAttachment_Resolve.samples = VK_SAMPLE_COUNT_1_BIT;
...

VkAttachmentDescription depthAttachment_MSAA{};
depthAttachment_MSAA.samples = VK_SAMPLE_COUNT_4_BIT;
...

VkAttachmentDescription depthAttachment_Resolve{};
depthAttachment_Resolve.samples = VK_SAMPLE_COUNT_1_BIT;
depthAttachment_Resolve.loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
depthAttachment_Resolve.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
depthAttachment_Resolve.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
depthAttachment_Resolve.finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
...

VkAttachmentReference colorAttachment_MSAA_Ref{};
colorAttachment_MSAA_Ref.attachment = 0;
colorAttachment_MSAA_Ref.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

VkAttachmentReference depthAttachment_MSAA_Ref{};
depthAttachment_MSAA_Ref.attachment = 1;
depthAttachment_MSAA_Ref.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;

VkAttachmentReference colorAttachment_Resolve_Ref{};
colorAttachment_MSAA_Ref.attachment = 2;
colorAttachment_MSAA_Ref.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

//i created depthAttachment_Resolve_Ref but I couldn't find a place to use.
VkAttachmentReference depthAttachment_Resolve_Ref{};
depthAttachment_MSAA_Ref.attachment = 3;
depthAttachment_MSAA_Ref.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;

VkSubpassDescription subpass{};
subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 1;
subpass.pColorAttachments = &colorAttachment_MSAA_Ref;
subpass.pDepthStencilAttachment = &depthAttachment_MSAA_Ref;
subpass.pResolveAttachments = &colorAttachment_Resolve_Ref;

VkSubpassDependency dependency{};
dependency.srcSubpass = VK_SUBPASS_EXTERNAL;
dependency.dstSubpass = 0;
dependency.srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT | VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
dependency.srcAccessMask = 0;
dependency.dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT | VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
dependency.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT | VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;

array<VkAttachmentDescription, 4> attachments = { colorAttachment_MSAA , depthAttachment_MSAA,
colorAttachment_Resolve, depthAttachment_Resolve};

VkRenderPassCreateInfo renderPassInfo{};
renderPassInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
renderPassInfo.attachmentCount = static_cast<uint32_t>(attachments.size());
renderPassInfo.pAttachments = attachments.data();
renderPassInfo.subpassCount = 1;
renderPassInfo.pSubpasses = &subpass;
renderPassInfo.dependencyCount = 1;
renderPassInfo.pDependencies = &dependency;


2022-02-26

Regex: all before the first underscore, and all between the second underscore and the last period?

How do I get everything before the first underscore, and everything between the last underscore and the period in the file extension?

So far, I have everything before the first underscore, not sure what to do after that.

.+?(?=_)

EXAMPLES:

  • 111111_SMITH, JIM_END TLD 6-01-20 THR LEWISHS.pdf
  • 222222_JONES, MIKE_G URS TO 7.25 2-28-19 SA COOPSHS.pdf

DESIRED RESULTS:

  • 111111_END TLD 6-01-20 THR LEWISHS
  • 222222_MIKE_G URS TO 7.25 2-28-19 SA COOPSHS
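Taking the rule in the title literally (keep what comes before the first underscore, plus what sits between the second underscore and the last period), one capture-group sketch is below. Note that the second desired result above keeps "MIKE_", which that rule does not produce, so this follows the title rather than that example:

```python
import re

# group 1: everything before the first underscore
# group 2: everything after the second underscore, up to the last period
pattern = re.compile(r'^([^_]*)_[^_]*_(.*)\.[^.]*$')

def shorten(filename):
    m = pattern.match(filename)
    return '%s_%s' % (m.group(1), m.group(2)) if m else filename

print(shorten('111111_SMITH, JIM_END TLD 6-01-20 THR LEWISHS.pdf'))
# -> 111111_END TLD 6-01-20 THR LEWISHS
```

The greedy `(.*)` together with the anchored `\.[^.]*$` ensures only the final period (the file extension) is stripped, so periods inside the name, like "7.25", survive.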


Error "'utf-8' codec can't decode byte 0x96" in Python on macOS

import pandas as pd,re,emoji,os,string
from textblob import TextBlob
import dateutil.parser as dparser

keyword_bank = None
keyword_files_paths = []


def getSubjectivity(text):
    return TextBlob(text).sentiment.subjectivity
def getPolarity(text):
    return TextBlob(text).sentiment.polarity
def deEmojify(text):
    text = str(text).encode("utf-8") 
    text =  emoji.get_emoji_regexp().sub(r'', text.decode('utf-8'))
    try:
        text = re.sub(r'\w+:\/{2}[\d\w-]+(\.[\d\w-]+)*(?:(?:\/[^\s/]*))*', '', text)
    except:pass
    
    return text
def extractDateFromString(text):
    try:
        return dparser.parse(str(text),fuzzy=True).date()
    except:
        return text
def getWordCount(text):
    try:
        return len(str(text).split())
    except:
        len(str(text))
               
def filterText(text): 
    newstr = ''
    string1 = text 
    for char in text:
        if str(char).isalpha():
            newstr = newstr+char
        else:
            newstr = newstr+" "
            
    return newstr.strip().replace("  "," ").replace("@",'')
    # for index,x in enumerate(string1):
    #     if index<len(string1)-1 and index>0:
    #         if str(string1[index-1]).isnumeric() or  str(string1[index+1]).isnumeric() :
    #             continue
    #     newstr = newstr +x
    # chrs = ['(',')','-',';','%',*[str(x) for x in range(0,10)], '\n','  ']
    # for x in chrs:
    #     newstr = newstr.replace(x,' ') 
    # return newstr.strip()  

def readTweetsFile(path):
    file_name = os.path.basename(path)
    if '.xlsx' in str(file_name).lower():
        df = pd.read_excel(path)
    elif '.csv' in str(file_name).lower():
        df = pd.read_csv(path)
    elif '.txt' in  str(file_name).lower():
        tweets = str(open(path, 'r', ).read()).strip().replace('\n',' ')
        tweets = filterText(text=tweets)
        tweets = [str(x).strip() for x in tweets.split('.')]
        df = pd.DataFrame(tweets,columns=['Content'])
    return df

def getKeywordBankDict(paths):
    keyword_bank = {}
    for path in paths:
        file_name = os.path.basename(path)
        if '.txt' in file_name:
            with open(path, 'r', encoding="utf-8") as file: 
                lines = [str(x).strip()  for x in file.readlines()]
                lines = list(dict.fromkeys([x for x in lines if bool(x)]))
                keyword_bank[file_name] = lines

    return keyword_bank  

def checkOccurance(text,file_name):
    keywords = keyword_bank[file_name]
    occurance = [word for word in keywords if str(word).strip().lower() in str(text).strip().lower()]
    if len(occurance)>0:
        return occurance[0] 
    
    return 0


results_folder_path = None
def checkResultsFolder(): 
    global results_folder_path
    results_folder_path = os.path.join( os.getcwd() , 'Results')
    if not os.path.exists(results_folder_path):
        os.makedirs(results_folder_path)


checkResultsFolder()

def tweetFileReportGenerator(path,keyword_files_paths,company_name=None,date=None,is_text_file=False):
    checkResultsFolder()
    df = readTweetsFile(path)
    # df = df.head(800)
    df['Content'] = df['Content'].apply(deEmojify)
 
    if not is_text_file:
        df = df.drop(columns=['PostID','Ticks', 'TweetUrl','PostID','RetweetNum', 'UserHandle', 'LikeNum', 'UserID', 'UserUrl','Location',])
        df['Time'] = df['Time'].apply(extractDateFromString)
        df.columns = df.columns.str.replace('UserName', 'Company Name') 
        df.columns = df.columns.str.replace('Time', 'Date') 
        # df.columns = df.columns.str.replace('Content', 'Tweet') 
    else:
        df['Date'] = [str(date)]*len(df['Content'])
        df['Company Name'] = [str(company_name)]*len(df['Content'])
 
    df['Length'] = df['Content'].apply(getWordCount)
    df['Polarity'] = df['Content'].apply(getPolarity)
    df['Subjectivity'] = df['Content'].apply(getSubjectivity)
    
    
    for path in keyword_files_paths[:]:
        file_name = os.path.basename(path)
        df[str(file_name).split('.')[0]] = df['Content'].apply(checkOccurance, args=[str(file_name)])

    # df.columns = df.columns.str.replace('Content', 'Tweet')
    
    
    df_columns = list(df.columns)
    df = df.groupby('Company Name') 
    df = [{'file_name':name,'data':list(data.values.tolist())} for name,data in df]
    
    for dataset in df:
        file_name = dataset['file_name']
        data = dataset['data']
        file_path = os.path.join(results_folder_path,file_name)
        new_df = pd.DataFrame(data=data,columns=df_columns)
        new_df.to_csv(file_path+'.csv',index=False)




# Interface
print("1. CSV or Excel")
print("2. Text file")
tweet_data_file_type = int(input("Enter file type [1,2] = "))
while tweet_data_file_type not in [1,2]:
    print("1. CSV or Excel")
    print("2. Text file")
    tweet_data_file_type = int(input("Enter file type [1,2] = "))


tweet_data_file_path = input("Enter tweets data file path = ")
while not os.path.exists(tweet_data_file_path):
    print("* Enter a valid path for tweets data file")
    tweet_data_file_path = input("Enter tweets data file path = ")


keyword_files_paths = []
total_keyword_files = int(input("Enter total number of keyword files = "))
while total_keyword_files < 1 :
    print("* Enter a valid number > 0")
    total_keyword_files = int(input("Enter total number of keyword files = "))

file_index = 1
while file_index <=total_keyword_files:
    path = input(f"Enter path for keyword file {file_index} = ")
    if path in keyword_files_paths:
        print(f"* Path ({path}) is already provided !")
        continue
    while  not os.path.exists(path):
        print(f"* Enter a valid path for keyword file {file_index}")
        path = input(f"Enter path for keyword file {file_index} = ")
    file_index = file_index+1
    keyword_files_paths.append(path)    

if tweet_data_file_type == 2:
    company_name = input("Enter company name = ")
    date = input("Enter date = ")

keyword_bank = getKeywordBankDict(paths=keyword_files_paths[:])

print("Processing ...")
if tweet_data_file_type == 2: 
    tweetFileReportGenerator(path=tweet_data_file_path,keyword_files_paths=keyword_files_paths,company_name=company_name,date=date,is_text_file=True)
else:
    tweetFileReportGenerator(path=tweet_data_file_path,keyword_files_paths=keyword_files_paths)

 

I replaced utf-8 with different options but it is still not working. Any advice, please? I am using macOS. Is there any chance to fix this? This is my first day using Stack Overflow, so sorry if my post is not professional; I am new to programming and not yet familiar with fixing errors. Thank you so much for your kind support.
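The post doesn't include the traceback, but a frequent cause on macOS is a data file saved in a non-UTF-8 encoding. A small stdlib-only helper (the encoding list here is a guess) can identify which codec actually decodes the file:

```python
def read_text_any_encoding(path, encodings=("utf-8", "utf-8-sig", "latin-1", "mac_roman")):
    """Return (text, encoding) for the first candidate encoding that decodes the file."""
    for enc in encodings:
        try:
            with open(path, encoding=enc) as f:
                return f.read(), enc
        except UnicodeDecodeError:
            continue
    raise ValueError(f"none of {encodings} could decode {path}")
```

Once the working encoding is known, it can be passed along to the real reader, e.g. pd.read_csv(path, encoding=enc).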



How to find_elements with multiple attributes using cssSelector Python and Selenium

I'm trying to find all the elements that have <a href=....> inside the <div id="contents" ...>

Here is the HTML code:

<div id="contents" class="style-scope">
    <div id="dismissible" class="style-scope">
        <a id="thumbnail" href="http://www.test.com">

Here is my python code:

items = driver.find_elements(By.cssSelector("div[id="contents"] a[id="thumbnail"]"))

It gives me the error:

SyntaxError: invalid syntax. Perhaps you forgot a comma?

Where do I need to put the comma?
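Two issues seem likely (hedged, since only one line of the script is shown): in the Python bindings the locator constant is By.CSS_SELECTOR, passed as a separate argument, and the unescaped inner double quotes terminate the string literal early, which is what produces the SyntaxError:

```python
# Single quotes around the selector keep the attribute quotes intact and fix the
# SyntaxError; By.CSS_SELECTOR (underscore form) is the Python spelling.
selector = 'div[id="contents"] a[id="thumbnail"]'   # or simply: "div#contents a#thumbnail"

# With a live driver (not created here):
#   from selenium.webdriver.common.by import By
#   items = driver.find_elements(By.CSS_SELECTOR, selector)
print(selector)
```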



Multiplying columns in a visual in Power BI

If I have a matrix visual where one column is a percentage of column total (where filters can change the output), how can I multiply that by another column within the matrix?

For example, what if I’m multiplying the forecast error of an item (a column in an existing table) by the percentage of sales dollars (which would appear as “%CT sales dollars”)?

Edit: adding screenshots.

In the first screenshot you see that we have "products" and their sales dollars are marked as a percentage of the column total.

In the second screenshot, you see that the column still totals 100%, even if the matrix is filtered.

How do we create a third column that multiplies the two numeric columns together? IE- dollar percentage times forecast error percentage?

Screenshot 1

Screenshot 2



What is the difference between CSVLogger and model.evaluate()?

I have been using Keras/TF's CSVLogger in order to save the training and validation accuracies. Then I was plotting those data to check the trajectory of training and validation accuracy or loss.

Yesterday, I read this link.

Here, they used model.evaluate() to plot the accuracy metric.

What is the difference between these two approaches?



Reduce size of snowflake-connector-python[pandas] module

I am trying to create a lambda function in AWS which connects to a Snowflake database. For this I need the snowflake-connector-python[pandas] package (https://docs.snowflake.com/en/user-guide/python-connector-pandas.html), which together with all of its dependencies has a size of over 250 MB uncompressed (around 280 MB). This is an issue because AWS lambda allows a maximum of 250 MB of dependencies (using AWS layers).

The size of the package is quite surprising, looking at the dependencies the biggest offenders are pyarrow (around 80 MB), pandas (around 60 MB), and numpy (around 40 MB). Is there a way to reduce the size of the whole package, installing only the relevant parts, so as to reduce the size to below 250 MB? Namely I need to be able to connect, read, and write to Snowflake, nothing fancy.

I know that there are other options in these cases, such as containers, however I would like to avoid this if possible.



Logging in to PostgreSQL from cmd using a specific port

How can I use the command prompt to log in to a Postgres installation? I am trying to use this command in the C:\Program Files\PostgreSQL\14\data directory:

psql -h localhost -p 5433

Another program has an earlier release of Postgres installed on port 5432, hence port 5433. But it produces this error:

psql: error: connection to server at "localhost" (::1), port 5433 failed: FATAL:  role "user1" does not exist

https://docs.bitnami.com/virtual-machine/infrastructure/postgresql/administration/connect-remotely/



2022-02-25

How do I cleanly delete elements from a Sidebar List in SwiftUI on macOS

I would like to give users the option to delete List elements from a SwiftUI app's sidebar in macOS.

Here's what I currently have:

import Foundation
import SwiftUI

@main
struct FoobarApp: App {

    @StateObject private var modelData = ModelData()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(modelData)
        }
        .commands {

        }
    }
}

class ModelData: ObservableObject {
    @Published var myLinks = [URL(string: "https://google.com")!, URL(string: "https://apple.com")!, URL(string: "https://amazon.com")!]
}


struct ContentView: View {

    @EnvironmentObject var modelData: ModelData

    @State private var selected: URL?

    var body: some View {
        NavigationView {

            List(selection: $selected) {
                Section(header: Text("Bookmarks")) {

                    ForEach(modelData.myLinks, id: \.self) { url in
                        NavigationLink(destination: Text("Viewing detailed data for: \(url)") ) {
                            Text(url.absoluteString)
                        }
                        .tag(url)
                    }

                }
            }
            .onDeleteCommand {
                if let selection = selected {
                    modelData.myLinks.remove(at: modelData.myLinks.firstIndex(of: selection)!)
                }
            }
            .toolbar {
                Button("Selected") {
                    print(selected ?? "Nothing selected")
                }
            }

            Text("Choose a link")

        }
        .frame(minWidth: 200, minHeight: 500)
    }
}

When I select one of the sidebar links and press the delete button on my keyboard, the link does get deleted, but the detail view doesn't always get cleared.

Here's a gif of what I'm referring to:

gif of weird behavior

I'd like for the detail view to revert back to the default text, Choose a link, after the user deletes a link. Is there a way to fix this?

Also, I noticed that ContentView.selected doesn't get cleared after a link gets deleted. Is this expected? (I included a Selected button which prints the contents of this variable)

I'm using macOS 12.2.1 & Xcode 13.2.1.

Thanks in advance.



Add PowerShell package as native dependency

How should one add the PowerShell package as a native dependency? In spite of adding it, one is unable to run basic PowerShell commands and show output on the screen. Please find the REPL link below.

https://blazorrepl.telerik.com/GQacljYn3647KP3G59

The HTML and C# Hello World are working. However, the PowerShell Hello World doesn’t work.

PowerShell 'Hello World' not working



Sharing node_modules folder between lambda using Lambda Layers + Cloud Formation

I have a project that uses serverless-framework (this) to define the AWS resources to be used. I have the various .yml files that describe each resource that the project needs to run.

Recently, I've had to install several NPM packages for my lambdas and they've become very large in megabytes (>3MB), so the code is no longer viewable from the console.

Since including node_modules in each lambda is not a best practice and they are very heavy this way, I was wondering about using a Lambda Layer to share node_modules between lambdas.

As .yml I have a shared structure between all of them called provider.yml, something like:

name: aws
runtime: nodejs14.x
lambdaHashingVersion: 20201221
region: ${opt:region, "eu-central-1"}
stage: ${opt:stage, "dev"}
profile: ${self:custom.profiles.${self:provider.stage}}
deploymentBucket:
 name: avatar-${self:provider.stage}-deploymentbucket
 versioning: true
 blockPublicAccess: true
environment:
 EXAMPLE_ENV_VAR: ${self:custom.baseResource}-envvar
USERPOOL:
 'Fn::ImportValue': ${self:custom.baseResource}-userPoolId
APP_CLIENT_ID:
 'Fn::ImportValue': ${self:custom.baseResource}-userPoolClientId
iamRoleStatements:
 - Effect: Allow
   Action:
    - dynamodb:Query
    - ...
   Resource: "*"

Then I have a file that includes the provider.yml and all the lambdas (called serverless.yml) and is the file that I use to deploy:

service: listOfLambdas
app: appname
frameworkVersion: '2'
projectDir: ../../

provider: ${file(../../resources/serverless-shared-config/provider.yml)}

package:
 individually: true
 exclude:
  - "**/*"

functions:
 - ${file(./serverless-functions.yml)}

Finally, I have the serverless-functions.yml that contains the Lambdas structure:

lambdaName:
  handler: src/handlers/auth/example.run
  name: ${self:custom.base}lambdaName
  description: Lambda description
  events:
    - httpApi:
        method: POST
        path: /api/v1/example
  package:
    include:
      - ../../node_modules/**

This includes the node_modules folder in the Lambda.

How can I create a resource with a YML template managed by CloudFormation to create a Lambda Layer to which I can assign all my lambdas, so that they share the node_modules folder? I expect to have to create a new serverless.yml inside the resources folder with the CloudFormation YML template to bind, I guess somehow, to my lambdas.

I need it to be managed by CloudFormation, so I can have a common stack with all the resources used by the project and so I can deploy it at start-up.

Where should node_modules then live in the project? Right now the project looks like this:

Root
|_ lib
|_ node_modules
|_ resources
   |_serverless-shared-config
     |_provider.yml
   |_s3
     |_serverless.yml
   |_apigateway
   |_ ... 
|_ services
   |_ lambda
      |_ src
         |_ index.js
         |_ ...
      |_ serverless.yml
      |_ serverless-functions.yml

Where can I find a template in order to solve this problem? Or what is the best practice in order to share the node_modules between Lambdas?
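One option that stays entirely inside serverless-framework's generated CloudFormation is its built-in layers block; the names and path below are illustrative, and for Node runtimes the layer folder must contain nodejs/node_modules so it unpacks to /opt/nodejs/node_modules at runtime:

```yaml
# serverless.yml (sketch): define the layer once ...
layers:
  nodeModules:
    path: layer                     # local folder containing nodejs/node_modules
    compatibleRuntimes:
      - nodejs14.x

# ... and reference it from each function via the generated CloudFormation resource,
# which serverless names <TitleCasedLayerName>LambdaLayer:
functions:
  lambdaName:
    handler: src/handlers/auth/example.run
    layers:
      - { Ref: NodeModulesLambdaLayer }
```

The function packages can then exclude node_modules entirely, which also restores code viewing in the console for the lambdas themselves.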



Accessing running container (inside VM) from host OS

I have set up 3 Linux VM through multipass on my host Mac OSX. I have installed docker on these machines and started a 3 node docker swarm.

docker swarm init

After that, I have created 2 services one is for Postgres DB and the other is for drupal with the following command.

docker service create -d --name postgras-db --network test-swarm-network -e POSTGRES_PASSWORD=<password> postgres

docker service create -d  --name drupal-frontend --network test-swarm-network -p 80:80 drupal

Port 80:80 is exposed for drupal and both of these services are connected to the same overlay network. My services are up and running.

If I were doing everything on the host machine I would simply open http://localhost:80 or http://localhost to get the desired output. But now that it runs in a VM, how can I test the drupal frontend from the host Mac, i.e. which IP do I hit in the host browser to get the desired result?

P.S.: There is nothing specific here about drupal (it could be any other container, like Nginx); the question is about accessing a running container (inside a VM) from the host OS.



Display is good but the software may have some problem, because hardware is good

enter image description here

The system is crashing again and again without any reason.



2022-02-24

How to sort an array by a number inside w/ javascript [duplicate]

So I'm having trouble sorting an array. I'm trying to make it descend from the highest average to the lowest. This is the array:

[
    {
        "average": "86.8",
        "user": "User1"
    },
    {
        "average": "93",
        "user": "User2"
    },
    {
        "average": "91.5",
        "user": "User3"
    }
]

This is how I set the array

let usr = []
if (users.docs.length > 0) {
    for (const user of users.docs) {
        let exportable = user.data()
        usr.push(exportable)
    }
}
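The averages are stored as strings, so a plain sort compares them lexicographically ("9..." sorts after "8..." only by accident of the first character). A numeric sort key fixes this; sketched here in Python with the question's data (in JavaScript the analogous call would be usr.sort((a, b) => Number(b.average) - Number(a.average))):

```python
# Convert the string average to a number inside the sort key for a true numeric,
# descending sort.
usr = [
    {"average": "86.8", "user": "User1"},
    {"average": "93", "user": "User2"},
    {"average": "91.5", "user": "User3"},
]

usr.sort(key=lambda d: float(d["average"]), reverse=True)
print([d["user"] for d in usr])   # ['User2', 'User3', 'User1']
```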


How to get a line count of all individual files in a directory on AWS S3 using a terminal?

I am new to terminal commands. I know we could do something like wc -l directory/* if the files were local. But how do I achieve the same on AWS S3 using a terminal? The output should be the file name and the count.

For example, there are two files present in a directory in S3 - 'abcd.txt' (5 lines in the file) and 'efgh.txt' (10 lines in the file). I want the line counts of each file without downloading the files, using terminal. Output - 'abcd.txt' 5 'efgh.txt' 10
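S3 has no server-side API that returns a line count, so the bytes still have to transfer over the network (S3 Select on CSV objects is the one partial exception); what can be avoided is writing the files to disk. A hedged Python sketch that streams each object and counts lines, with bucket and prefix as placeholders:

```python
import io

def count_lines(stream, chunk_size=1 << 20):
    """Count newline characters in a binary file-like object, streaming in chunks."""
    count = 0
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        count += chunk.count(b"\n")
    return count

# With boto3 (assumes configured credentials; not run here):
#   import boto3
#   s3 = boto3.client("s3")
#   resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="directory/")
#   for obj in resp.get("Contents", []):
#       body = s3.get_object(Bucket="my-bucket", Key=obj["Key"])["Body"]
#       print(obj["Key"], count_lines(body))

print(count_lines(io.BytesIO(b"a\nb\nc\n")))   # 3, matching wc -l
```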



How to manipulate the ScrollController in Flutter's sliding up panel plugin?

I'm using Flutter's sliding_up_panel plugin.

I want to scroll the panel to the top when a new item is selected from my app drawer. Presently, selecting a new item closes the panel, refreshes the panel content, then opens it to a 200px peek, but it doesn't reset the panel's scroll location to the top.

I've been going around in circles trying the same solutions in slightly different ways and getting nowhere.

What I've tried: I have global

  PanelController slidingPanelController = new PanelController();
  ScrollController slideUpPanelScrollController = new ScrollController();

I tried attaching my global slideUpPanelScrollController to my panel's listview, but when swiping up the panel's ListView it simultaneously starts closing the whole panel. If you were scrolling up to read the content you'd skimmed, well, you're not able to because it's disappearing.

Preventing this bug is easy: you do it the canonical way from the plugin's examples, passing the ScrollController through from SlidingPanel and therefore creating a local ScrollController in the panel's ListView.

panelBuilder: (slideUpPanelScrollController) => _scrollingList(presentLocation, slideUpPanelScrollController)

The problem then is, you can't scroll the panel on new App drawer selections, because the controller is now local.

I tried putting a listener on the local listview ScrollController, _slideUpPanelScrollController and testing for panelController.close():

if(slidingPanelController.isPanelClosed) _slideUpPanelScrollController.jumpTo(0);

But the listener blocked the panel from swiping: swipe events fired, but the panel didn't swipe, or was extremely reluctant to.

Having freshly selected content open in the panel displaying content halfway down the ListView is a glitchy user experience. I would love some ideas or better solutions.

I need it so when the panel is closed, I can slideUpPanelScrollController.jumpTo(0);

I need the global controller to attach to the panel ListView's local controller, or I need a way to access the local controller to fire its Scroll from outwith my _scrollingList() function.

Here's the panel Widget:

  SlidingUpPanel(
    key: Key("slidingUpPanelKey"),
    borderRadius: slidingPanelBorderRadius,
    parallaxEnabled: false,
    controller: slidingPanelController,
    isDraggable: isDraggableBool,
    onPanelOpened: () {
    },
    onPanelSlide: (value) {
      if (value >= 0.98)
        setState(() {
          slidingPanelBorderRadius =
              BorderRadius.vertical(top: Radius.circular(16));
        });
    },
    onPanelClosed: () async {
      setState(() {
        listViewScrollingOff = true;
      });
      imageZoomController.value =
          Matrix4.identity(); // so next Panel doesn't have zoomed in image

      slidingPanelBorderRadius =
          BorderRadius.vertical(top: Radius.circular(16));
    },
    minHeight: panelMinHeight,
    maxHeight:
        MediaQuery.of(context).size.height - AppBar().preferredSize.height,
    panelBuilder: (slideUpPanelScrollController) => _scrollingList(presentLocation, slideUpPanelScrollController),
    body: ...

Here's the _scrollingList Widget:

  Widget _scrollingList(LocationDetails presentLocation, ScrollController _slideUpPanelScrollController ) {
return Center(
    child: ConstrainedBox(
        constraints: const BoxConstraints(maxWidth: 600),
        child: ListView(
            controller: _slideUpPanelScrollController,
            physics: listViewScrollingOff
                ? const NeverScrollableScrollPhysics()
                : const AlwaysScrollableScrollPhysics(),
            key: Key("scrollingPanelListView"),
            children: [

This is my onTap from my Drawer ListView item:

onTap: () {
  if(slidingPanelController.isPanelShown) {
  //slideUpPanelScrollController.jumpTo(0);
  slidingPanelController.close();
}

Love help! Below is I think a minimum viable problem. I wrote it in Dartpad, but sharing from Dartpad is nontrivial, so I've copied and pasted it here. Dartpad doesn't support the plugin anyway so it's not like you could tweak it there.

import 'package:flutter/material.dart';
import 'package:sliding_up_panel/sliding_up_panel.dart';

const Color darkBlue = Color.fromARGB(255, 18, 32, 47);

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  PanelController slidingPanelController = new PanelController();
  ScrollController slideUpPanelScrollController = new ScrollController();
  
  final String title = "sliding panel";
  
  String panelContent = "";
  String stupidText = "";
  String stupidText2 = "";
    
  int panelMinHeight = 0;
  int teaserPanelHeight = 77;
  
  bool listViewScrollingOff = false;
  
  initState() {
    super.initState();
    for(int i = 0; i < 500; i++) {
      stupidText += "More stupid text. ";
    }
    
     for(int i = 0; i < 500; i++) {
      stupidText2 += "More dumb, dumbest text. ";
    }
  }
  
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      theme: ThemeData.dark().copyWith(
        scaffoldBackgroundColor: darkBlue,
      ),
      debugShowCheckedModeBanner: false,
      home: Scaffold(
              appBar: AppBar(title: Text(title)),
                drawer: Drawer(
  child: ListView(
    padding: EdgeInsets.zero,
    children: [
      const DrawerHeader(
        decoration: BoxDecoration(
          color: Colors.blue,
        ),
        child: Text('Drawer Header'),
      ),
      ListTile(
        title: const Text('Item 1'),
        onTap: () {
          if(slidingPanelController.isPanelShown) {
            print('attempting to scroll to top and close panel');
            //slideUpPanelScrollController.jumpTo(0);
            slidingPanelController.close();
          }
          Navigator.of(context).pop();
          setState() {
            panelContent = stupidText;
            panelMinHeight = teaserPanelHeight;
          }
        },
      ),
      ListTile(
        title: const Text('Item 2'),
        onTap: () {
          if(slidingPanelController.isPanelShown) {
            //slideUpPanelScrollController.jumpTo(0);
            slidingPanelController.close();
          }
          Navigator.of(context).pop();
          setState() {
            panelContent = stupidText2;
            panelMinHeight = teaserPanelHeight;
          }
        }
      ),
    ],
  ),
),
        body: SlidingUpPanel(
        key: Key("slidingUpPanelKey"),
        borderRadius: 8,
        parallaxEnabled: false,
        controller: slidingPanelController,
        isDraggable: true,
        onPanelOpened: () async {
          setState(() {
            listViewScrollingOff = false;
            panelMinHeight = 0;
            animatedMarkerMap;
            //slideUpPanelScrollController.jumpTo(0);
          });
        },
        onPanelSlide: (value) {
          print("onPanelSlide: attempting to scroll panel");
        },
        onPanelClosed: () async {
          setState(() {
            //slideUpPanelScrollController.jumpTo(0);
            listViewScrollingOff = true;
          });
        },
        minHeight: panelMinHeight,
        maxHeight:
            MediaQuery.of(context).size.height - AppBar().preferredSize.height,
        // TODO BUG
        // SAM, IF I USE PANELBUILDER's ScrollController attached to the panel's ListView, then, when closing, the ListView will move to the top first, then the panel closes,
        // however ListView's controller is set to a globalController, this causes a bug when closing the panel, but means you can open/peek the panel from the App drawer,
        panelBuilder: (slideUpPanelScrollController) => _scrollingList(panelContent, slideUpPanelScrollController),
        
        body: Center(
          child: Text(
      'Hello, World!',
      style: Theme.of(context).textTheme.headline4,
    ),
        ),
          ),
      ),
    );
  }
  
    Widget _scrollingList(String panelContent, ScrollController _slideUpPanelScrollController ) {
    return Center(
        child: ConstrainedBox(
            constraints: const BoxConstraints(maxWidth: 600),
            child: ListView(
                controller: _slideUpPanelScrollController,
                physics: listViewScrollingOff
                    ? const NeverScrollableScrollPhysics()
                    : const AlwaysScrollableScrollPhysics(),
                key: Key("scrollingPanelListView"),
                children: [Text(panelContent)])));
    }
}

OK, so the problem became that when I closed the sliding panel 'naturally', by scrolling back up the panel to its top and then sliding the panel down, both things happened at once.

I've found how to solve this, I need to set SlidingUpPanel's isDraggable property to false, till the user has scrolled to the top of the panel.

Like so...

      @override
      void initState() {
        super.initState();
    
        slideUpPanelScrollController.addListener(() {
          if(slideUpPanelScrollController.offset == 0) {
            setState(() {
              isDraggableBool = true;
            });
          }
        });
}

The shortfall of this approach is that the listener runs its test whenever the panel is scrolled; could it jank the scroll? Is there a better/clearer/more performant way?

For completion I amended setScrollBehaviour to this:

      void setScrollBehavior(bool _canScroll, {resetPos = false}) {
    setState(() {
      canScroll = _canScroll;
      isDraggableBool = !_canScroll;
      if (resetPos) {
        slideUpPanelScrollController.jumpTo(0);
        isDraggableBool = true;
      }
    });
  }

So when the user can scroll they can't drag. When the panel closes, resetPos == true therefore the panel scrolls to the top AND it can be dragged (slid) once more.



Conditional call of a FastAPI Model

I have a multilang FastApi connected to MongoDB. My document in MongoDB is duplicated in the two languages available and structured this way (simplified example):


{
  "_id": xxxxxxx,
  "en": { 
          "title": "Drinking Water Composition",
          "description": "Drinking water composition expressed in... with pesticides.",
          "category": "Water", 
          "tags": ["water","pesticides"] 
         },
  "fr": { 
          "title": "Composition de l'eau de boisson",
          "description": "Composition de l'eau de boisson exprimée en... présence de pesticides....",
          "category": "Eau", 
          "tags": ["eau","pesticides"] 
         },  
}

I therefore implemented two models, DatasetFR and DatasetEN, each one referencing language-specific external models (Enums) for category and tags.

class DatasetFR(BaseModel):
    title:str
    description: str
    category: CategoryFR
    tags: Optional[List[TagsFR]]

# same for DatasetEN, changing the lang tag to EN

In the routes definition I forced the language parameter to declare the corresponding Model and get the corresponding validation.


@router.post("?lang=fr", response_description="Add a dataset")
async def create_dataset(request:Request, dataset: DatasetFR = Body(...), lang:str="fr"):
    ...
    return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_dataset)

@router.post("?lang=en", response_description="Add a dataset")
async def create_dataset(request:Request, dataset: DatasetEN = Body(...), lang:str="en"):
    ...
    return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_dataset)

But this seems to contradict the DRY principle. So I wonder if someone knows an elegant solution to, given the parameter lang, dynamically call the corresponding model.

Or if we can create a Parent Model Dataset that takes the lang argument and retrieve the child model Dataset.

This would incredibly ease building my API routes and the calls to my models, and halve the amount of code to write...
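One common way to keep this DRY (a sketch, not the only option) is to dispatch on lang through a dict of model classes and validate the body once inside a single handler:

```python
from dataclasses import dataclass
from typing import List, Optional

# Stand-ins for the question's pydantic models so this sketch runs standalone; in the
# real app DatasetEN/DatasetFR are the existing BaseModel classes with their
# language-specific Category/Tags enums, and instantiation raises ValidationError.
@dataclass
class DatasetEN:
    title: str
    description: str
    category: str
    tags: Optional[List[str]] = None

@dataclass
class DatasetFR:
    title: str
    description: str
    category: str
    tags: Optional[List[str]] = None

MODELS = {"en": DatasetEN, "fr": DatasetFR}

def parse_dataset(payload: dict, lang: str = "fr"):
    """Pick the language-specific model from the lang parameter and validate once."""
    return MODELS[lang](**payload)

# The two FastAPI routes then collapse into one (sketch, not run here):
#   @router.post("/", response_description="Add a dataset")
#   async def create_dataset(payload: dict = Body(...), lang: str = "fr"):
#       dataset = MODELS[lang](**payload)
```

The trade-off is that the OpenAPI schema no longer shows a concrete body model per route; if that matters, a Union body type is the alternative to investigate.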



Invalid Argument Error / Graph Execution Error

I'm having multiple errors while running this VGG training code (code and errors shown below). I don't know if it's because of my dataset or something else.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics.pairwise import cosine_similarity
import os
import scipy

train_directory = 'sign_data/train' #To be changed
test_directory = 'sign_data/test' #To be changed

train_datagen = ImageDataGenerator(
    rescale = 1./255,
    rotation_range = 0.1,
    width_shift_range = 0.2,
    height_shift_range = 0.2,
    shear_range = 0.1
)

train_generator = train_datagen.flow_from_directory(
    train_directory,
    target_size = (224, 224),
    color_mode = 'rgb',
    shuffle = True,
    batch_size=32
    
)


test_datagen = ImageDataGenerator(
    rescale = 1./255,
)

test_generator = test_datagen.flow_from_directory(
    test_directory,
    target_size = (224, 224),
    color_mode = 'rgb',
    shuffle = True,
    batch_size=32
)

from tensorflow.keras.applications.vgg16 import VGG16   
vgg_basemodel = VGG16(include_top=True)

from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)

vgg_model = tf.keras.Sequential(vgg_basemodel.layers[:-1])
vgg_model.add(tf.keras.layers.Dense(10, activation = 'softmax'))

# Freezing original layers
for layer in vgg_model.layers[:-1]:
    layer.trainable = False

vgg_model.compile(loss='categorical_crossentropy',
                  optimizer=tf.keras.optimizers.SGD(momentum=0.9, learning_rate=0.001, decay=0.01),
                  metrics=['accuracy'])

history = vgg_model.fit(train_generator,
              epochs=30,
              batch_size=64,
              validation_data=test_generator,
              callbacks=[early_stopping])

# finetuning with all layers set trainable

for layer in vgg_model.layers:
    layer.trainable = True

vgg_model.compile(loss='categorical_crossentropy',
                  optimizer=tf.keras.optimizers.SGD(momentum=0.9, learning_rate=0.0001),
                  metrics=['accuracy'])

history2 = vgg_model.fit(train_generator,
              epochs=5,
              batch_size=64,
              validation_data=test_generator,
              callbacks=[early_stopping])

vgg_model.save('saved_models/vgg_finetuned_model')

First error: Invalid Argument Error

    InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-13-292bf57ef59f> in <module>()
     14               batch_size=64,
     15               validation_data=test_generator,
---> 16               callbacks=[early_stopping])
     17 
     18 # finetuning with all layers set trainable

    /usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     53     ctx.ensure_initialized()
     54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55                                         inputs, attrs, num_outputs)
     56   except core._NotOkStatusException as e:
     57     if name is not None:

Second Error: Graph Execution Error

    InvalidArgumentError: Graph execution error:
Detected at node 'categorical_crossentropy/softmax_cross_entropy_with_logits' defined at (most recent call last):
    File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
      "__main__", mod_spec)
    File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
      exec(code, run_globals)
    File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
      app.launch_new_instance()
    File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance
      app.start()
    File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
      self.io_loop.start()
    File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
      self.asyncio_loop.run_forever()
    File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
      self._run_once()
    File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
      handle._run()
    File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
      self._context.run(self._callback, *self._args)
    File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
      handler_func(fileobj, events)
    File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
      return fn(*args, **kwargs)
    File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 452, in _handle_events
      self._handle_recv()
    File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 481, in _handle_recv
      self._run_callback(callback, msg)
    File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 431, in _run_callback
      callback(*args, **kwargs)
    File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
      return fn(*args, **kwargs)
    File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
      return self.dispatch_shell(stream, msg)
    File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
      handler(stream, idents, msg)
    File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
      user_expressions, allow_stdin)
    File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
      res = shell.run_cell(code, store_history=store_history, silent=silent)
    File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
      return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
    File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
      interactivity=interactivity, compiler=compiler, result=result)
    File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
      if self.run_code(code, result):
    File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
      exec(code_obj, self.user_global_ns, self.user_ns)
    File "<ipython-input-13-292bf57ef59f>", line 16, in <module>
      callbacks=[early_stopping])
    File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
      return fn(*args, **kwargs)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit
      tmp_logs = self.train_function(iterator)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function
      return step_function(self, iterator)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function
      outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step
      outputs = model.train_step(data)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in train_step
      loss = self.compute_loss(x, y, y_pred, sample_weight)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 919, in compute_loss
      y, y_pred, sample_weight, regularization_losses=self.losses)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
      loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
      losses = call_fn(y_true, y_pred)
    File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call
      return ag_fn(y_true, y_pred, **self._fn_kwargs)
    File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1790, in categorical_crossentropy
      y_true, y_pred, from_logits=from_logits, axis=axis)
    File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5099, in categorical_crossentropy
      labels=target, logits=output, axis=axis)
Node: 'categorical_crossentropy/softmax_cross_entropy_with_logits'
logits and labels must be broadcastable: logits_size=[32,10] labels_size=[32,128]
     [[]] [Op:__inference_train_function_11227]

I'm running this on Google Colaboratory. Is there a module I should install, or is it an error in the code itself?
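The shape mismatch in the last traceback frame is the real clue: the model's output has shape [32, 10] while the labels have shape [32, 128]. A frequent cause (an assumption here, since the notebook code isn't shown) is that the final Dense layer has 10 units but the labels were one-hot encoded to depth 128. A minimal sketch of the constraint, in pure Python:

```python
# Sketch of the shape constraint behind the error (no Keras needed):
# categorical_crossentropy requires the labels' one-hot depth to equal the
# number of units in the model's final Dense layer.

def one_hot(labels, depth):
    """One-hot encode integer class ids into rows of length `depth`."""
    return [[1.0 if j == y else 0.0 for j in range(depth)] for y in labels]

num_classes = 10           # e.g. a Dense(10, activation="softmax") output
batch_labels = [3, 7, 1]   # integer class ids for one (tiny) batch

encoded = one_hot(batch_labels, num_classes)

# Every row now matches the logits' last dimension (10, not 128).
assert all(len(row) == num_classes for row in encoded)
```

In Keras terms the usual fixes are either re-encoding the labels with `keras.utils.to_categorical(y, num_classes=10)`, or keeping integer labels and switching the loss to `sparse_categorical_crossentropy` (both are guesses here, since the model and data code aren't shown).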



ngModel does not work on modal page in Ionic 6

I am developing an Ionic/Angular app and wanted to use ngModel as always. I am opening a modal from a modal and then want to use it like this:

<ion-list>
<ion-item>
  <ion-label position="stacked">Name des Rezepts</ion-label>
  <ion-input [(ngModel)]="model" ></ion-input>
</ion-item>
</ion-list>
<ion-button (click)="save()">Save</ion-button>

I declared the variable in TypeScript like this:

public model="";

But when I click save, the output is always the empty string.

I imported ReactiveFormsModule in my app.module.ts, and it works on normal pages, but in modals it seems not to work.

Do you guys have any ideas? Thank you!



How to start Node.js and Filebeat in the same container

I need to create a Docker container with a Node.js app and Filebeat in the same container, so that Filebeat can relay the Node.js logs to my Logstash server. I have created the Dockerfile, and the image builds and runs without error. But when I go inside the container, I see that no Filebeat-related files were created. The Node.js app runs as expected, but Filebeat is not working at all.

I have used the custom filebeat.yml setup described in the Filebeat docs: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html

So, can we run the Node.js app and Filebeat in the same container? And if yes, what am I doing wrong?

Here is my Dockerfile:


# Test web app that returns the name of the host/pod/container servicing req
# Linux x64

FROM docker.elastic.co/beats/filebeat:7.13.4
COPY --chown=root:filebeat filebeat.yml /usr/share/filebeat/filebeat.yml

FROM node:current-alpine

LABEL org.opencontainers.image.title="Test node App" \
      org.opencontainers.image.description="Create logs for Opensearch" \
      org.opencontainers.image.authors="@user"

# Create directory in container image for app code
RUN mkdir -p /usr/src/app

# Copy app code (.) to /usr/src/app in container image
COPY . /usr/src/app

# Set working directory context
WORKDIR /usr/src/app

RUN mkdir -p /usr/src/app/logs

RUN touch /usr/src/app/logs/log.log

RUN touch /usr/src/app/logs/error_log.log

RUN ln -sf /proc/1/fd/1 /usr/src/app/logs/log.log

RUN ln -sf /proc/1/fd/1 /usr/src/app/logs/error_log.log

# Install dependencies from package.json
RUN npm install

# Command for container to execute
CMD [ "node", "index.js" ]


2022-02-23

How to read contents of a bin file to a vector? [duplicate]

I'm a beginner with C++, so I don't know much about it. I have been searching questions for a while but nothing quite helps.

How to read the contents of a .bin file into a std::vector<uint8_t>?



How to perform join query using Java stream

I am fetching data from the DB and storing it in lists. Then, based on the id, I want to create a new object. I have already implemented it using the old for-loop method; now I am trying to implement it using a Java stream. Any help? Thanks.

        List<Employee> emp = new ArrayList<Employee>();
        emp.add(new Employee(1, "Ben", "Glasgow"));
        emp.add(new Employee(2, "Max", "Seattle"));
        emp.add(new Employee(3, "Sam", "Mumbai"));
        emp.add(new Employee(4, "John", "Aukland"));
        emp.add(new Employee(5, "Rob", "Tokyo"));

        List<Department> dpt = new ArrayList<Department>();
        dpt.add(new Department(1, 40000, "tech"));
        dpt.add(new Department(2,  30000, "mgm"));
        dpt.add(new Department(3,  50000, "tech"));
        dpt.add(new Department(4,  30000, "mgm"));


        List<EmpDep> empDep = new ArrayList<EmpDep>();
        
        /**How to do this using stream
        for (int i = 0; i < emp.size(); i++){
            empDep.add(new EmpDep(emp.get(i).getId(), emp.get(i).getName(), dpt.get(i).getSalary()));
        }
        */
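A direct stream translation of the commented-out loop could look like the sketch below. It uses records in place of the original classes (an assumption, since their definitions aren't shown) and clamps the range with `Math.min`, because `dpt` has fewer entries than `emp` here and the original loop would throw an `IndexOutOfBoundsException` at i = 4:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class JoinDemo {
    // Records stand in for the original Employee/Department/EmpDep classes.
    record Employee(int id, String name, String city) {}
    record Department(int id, int salary, String type) {}
    record EmpDep(int id, String name, int salary) {}

    // Index-based pairing, mirroring the commented-out for loop.
    static List<EmpDep> join(List<Employee> emp, List<Department> dpt) {
        return IntStream.range(0, Math.min(emp.size(), dpt.size()))
                .mapToObj(i -> new EmpDep(emp.get(i).id(),
                                          emp.get(i).name(),
                                          dpt.get(i).salary()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Employee> emp = List.of(
                new Employee(1, "Ben", "Glasgow"),
                new Employee(2, "Max", "Seattle"),
                new Employee(3, "Sam", "Mumbai"));
        List<Department> dpt = List.of(
                new Department(1, 40000, "tech"),
                new Department(2, 30000, "mgm"));
        join(emp, dpt).forEach(System.out::println);
    }
}
```

If the match is really by id rather than by list position, building a `Map<Integer, Department>` with `Collectors.toMap` keyed on the id and looking each employee up in it would be the safer join.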


Text in vertical axis in pyqtgraph

How can I add text to the vertical axis in pyqtgraph (other than the axis label)? When I use TextItem, the text is added to the ViewBox, not to the axis.



ZMQ detect client unavailability with heartbeat

I am experimenting with ZMQ, but I could not find an answer in the docs and examples on the proper use of the following socket options:

  • ZMQ_HEARTBEAT_IVL
  • ZMQ_HEARTBEAT_TIMEOUT
  • ZMQ_HEARTBEAT_TTL

The documentation states that they can be used with any socket types ("when using connection-oriented transports"). Is there a way to use them with REQ/REP socket pairs?

Test client code:

zmq::context_t context( 1 );
zmq::socket_t client( context, ZMQ_REQ );
client.connect( "tcp://localhost:5555" );

// Send request.
std::string request( "test message" );
zmq::message_t message( request.size() );
memcpy( message.data(), request.data(), request.size() );
client.send( message );

// Wait for reply.
client.recv( &message );

// Do some work that takes ca. 10 secs.
// ...

// client and context are closed by their destructors at end of scope.

Test server code:

zmq::context_t context( 1 );
zmq::socket_t server( context, zmq::socket_type::rep );

//server.set( zmq::sockopt::rcvtimeo, 5000 );
server.set( zmq::sockopt::heartbeat_ivl, 1000 );
server.set( zmq::sockopt::heartbeat_timeout, 3000 );
server.set( zmq::sockopt::heartbeat_ttl, 3000 );

server.bind( "tcp://*:5555" );

// Wait for request.
zmq::message_t message;
server.recv( &message );

// Send reply.
zmq::message_t message2("Test reply.", 11);
server.send( message2 );

// Wait for new request.
server.recv( &message );

// ...

My goal is to end the server side after the client is done. The socket option zmq::sockopt::rcvtimeo works well; it closes the connection on the server side. I thought the heartbeat socket options would do something similar, i.e. close the server side after the client has finished its job and not earlier. I tried them in all possible combinations on both the server and client sides, but the server side always gets stuck in the second recv operation. Checking the localhost network traffic with Wireshark shows that the ping-pong messages are delivered, but that's all.

So my questions are:

  • Can these socket options be used in a REQ/REP connection?
  • If yes, then how? Which side shall set these socket options?
  • If no, then what socket types are recommended?

Thanks in advance!



Saving array IDs to the database with a value for each ID

Good evening. I hope I can explain my problem correctly. I am getting IDs in an array and a value (a number) in the controller. Now I want to save the number against each ID.

       array of IDs: ["Buffalo-01", "Buffalo-02", "Buffalo-04"]
       number: 40

I want to save 40 to each ID.

Controller

    public function addbuffalototalmilk(Request $req )
        {
        $buffalomilking         =   Buffalodata::where('avgmilk','<>','0')->where('status','=','Available')->count(); // Milking animal count
        $getbuffalomilkingid    =   Buffalodata::where('avgmilk','<>','0')->where('status','=','Available')->pluck('buffaloID'); // Get buffalo IDs of milking animals

            $totalmorningmilk       =   $req->get('morningtotalmilk');
            $totaleveningmilk       =   $req->get('eveningtotalmilk');

            $eachmorningmilk        =   ($totalmorningmilk / $buffalomilking);
            $eacheveningmilk        =   ($totaleveningmilk / $buffalomilking);

        
            
            
            return response ();
        } 

Thanks in Advance



Request not being completed with Expo authSession

I'm quite new to app development and have run into an error while trying to set up Google authentication; it appears the request hasn't finished loading.

(Error message is: [Unhandled promise rejection: Error: Cannot prompt to authenticate until the request has finished loading.])

I'm not sure how I can work around this issue; perhaps some more experienced developers can answer my question?

import React, { useState, useEffect } from 'react';
import { View, Text, Button, StyleSheet, TextInput } from 'react-native';
import { useAuthRequest } from 'expo-auth-session';
import * as WebBrowser from 'expo-web-browser';

WebBrowser.maybeCompleteAuthSession();

const App = () => {
    const [accessToken, setAccessToken] = useState();
    const [request, response, promptAsync] = useAuthRequest({
        iosClientId: "701989901250-95eku3luaf6qj1q0ba82dmeun3v4f486.apps.googleusercontent.com",
        expoClientId: "701989901250-ieiupvqri2hskpacksaoe7r6vjmu3e24.apps.googleusercontent.com",
        //androidClientId: "",
    });

    useEffect(() => {
        if (response?.type === "success") {
          setAccessToken(response.authentication.accessToken);
        }
      }, [response]);

    return (
        <View>
            <Button styles={styles.button} title="Sign-in with Google" /* google login button */
            onPress={() => { promptAsync({useProxy: false, showInRecents: true}) }}/>
        </View>
    );
}

EDIT: For those interested, I did find a fix for this error. It was a weird issue, but all I changed was the third line and the tenth line.

// third
import * as Google from 'expo-auth-session/providers/google';
// tenth
const [request, response, promptAsync] = Google.useAuthRequest({


What is Knative's "mesh" gateway?

I see that for every Knative service, two VirtualService objects are created: ksvc-ingress, which has the knative-serving/knative-ingress-gateway and knative-serving/knative-local-gateway gateways configured, and ksvc-mesh, which has mesh as the gateway.

I can see the knative-serving/* gateways using kubectl, but I am unable to find the mesh gateway object in any namespace. I would like to understand whether mesh here denotes some special object or is an Istio keyword representing something else.



2022-02-22

Spring Repositories returning Lists example

The Spring Data CrudRepository has various methods that return multiple instances of the entity managed by the repository. It does so by using Iterable and not List, as one might expect. In many cases, that is of no consequence, since you typically want to iterate over the result anyway. However, you might occasionally prefer a List. In these cases, Iterable is annoying.
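Until the List-returning repositories are available to you, a standard workaround on Spring Data 2.x is to copy the returned Iterable into a List yourself, for example with StreamSupport (a generic sketch, not tied to any particular repository):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

public class IterableToList {

    // Copy any Iterable into a List via the Stream API.
    public static <T> List<T> toList(Iterable<T> iterable) {
        return StreamSupport.stream(iterable.spliterator(), false)
                .collect(Collectors.toList());
    }

    // Equivalent without streams, for comparison.
    public static <T> List<T> toListLoop(Iterable<T> iterable) {
        List<T> result = new ArrayList<>();
        iterable.forEach(result::add);
        return result;
    }

    public static void main(String[] args) {
        // A plain List stands in for the Iterable a repository's findAll() returns.
        Iterable<String> names = List.of("Ben", "Max", "Sam");
        System.out.println(toList(names));  // [Ben, Max, Sam]
    }
}
```

A helper like this is easy to park in a utility class and delete once you move to the List-returning repository interfaces.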

I will write more about why that choice was made in the first place and how you can deal with it as long as you are on Spring Data 2.x. However, let me get the good news out first:

Repositories returning Lists

Spring Data 3.0.0-M2 now offers a ListCrudRepository, which returns a List where CrudRepository returns an Iterable.

Example 1. CrudRepository versus ListCrudRepository
@NoRepositoryBean
public interface CrudRepository<T, ID> extends Repository<T, ID> {

<S extends T> S save(S entity);

<S extends T> Iterable<S> saveAll(Iterable<S> entities);

Optional<T> findById(ID id);

boolean existsById(ID id);

Iterable<T> findAll();

Iterable<T> findAllById(Iterable<ID> ids);

long count();

void deleteById(ID id);

void delete(T entity);

void deleteAllById(Iterable<? extends ID> ids);

void deleteAll(Iterable<? extends T> entities);

void deleteAll();
}
@NoRepositoryBean
public interface ListCrudRepository<T, ID> extends CrudRepository<T, ID> {

<S extends T> List<S> saveAll(Iterable<S> entities);

List<T> findAll();

List<T> findAllById(Iterable<ID> ids);
}

Splitting the Sorting Repositories

The popular PagingAndSortingRepository used to extend from CrudRepository, but it no longer does. This lets you combine it with either CrudRepository or ListCrudRepository or a base interface of your own creation. This means you now have to explicitly extend from a CRUD fragment, even when you already extend from PagingAndSortingRepository.

Example 2. Paging and sorting repository — Version 2.x
public interface PersonRepository<Person, Long> extends PagingAndSortingRepository<Person, Long> {}
Example 3. Paging and sorting repository — Version 3.x
public interface PersonRepository<Person, Long> extends PagingAndSortingRepository<Person, Long>, ListCrudRepository<Person, Long> {}
There are also other interfaces that return Iterable<T> and that now got a companion interface that returns List<T>.

New fragment interfaces returning List:

  • ListQuerydslPredicateExecutor
  • ListQueryByExampleExecutor

The sorting fragments were split the same way; each one no longer extends its CRUD counterpart:

Sorting fragment interface     CRUD repository it no longer extends
ReactiveSortingRepository      ReactiveCrudRepository
Rx3JavaSortingRepository       Rx3JavaCrudRepository
CoroutineSortingRepository     CoroutineCrudRepository

Spring Security: Configuring HttpSecurity

Spring Security 5.4 introduced the ability to configure HttpSecurity by creating a SecurityFilterChain bean.

Below is an example configuration using the WebSecurityConfigurerAdapter that secures all endpoints with HTTP Basic:

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests((authz) -> authz
                .anyRequest().authenticated()
            )
            .httpBasic(withDefaults());
    }

}
Going forward, the recommended way of doing this is registering a SecurityFilterChain bean:

@Configuration
public class SecurityConfiguration {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests((authz) -> authz
                .anyRequest().authenticated()
            )
            .httpBasic(withDefaults());
        return http.build();
    }

}

Spring Security: Configuring WebSecurity

Spring Security 5.4 also introduced the WebSecurityCustomizer.

The WebSecurityCustomizer is a callback interface that can be used to customize WebSecurity.

Below is an example configuration using the WebSecurityConfigurerAdapter that ignores requests that match /ignore1 or /ignore2:

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    public void configure(WebSecurity web) {
        web.ignoring().antMatchers("/ignore1", "/ignore2");
    }

}
Going forward, the recommended way of doing this is registering a WebSecurityCustomizer bean:

@Configuration
public class SecurityConfiguration {

    @Bean
    public WebSecurityCustomizer webSecurityCustomizer() {
        return (web) -> web.ignoring().antMatchers("/ignore1", "/ignore2");
    }

}
WARNING: If you are configuring WebSecurity to ignore requests, consider using permitAll via HttpSecurity#authorizeHttpRequests instead. See the configure Javadoc for additional details.

Spring Security: LDAP Authentication example

Spring Security 5.7 introduced the EmbeddedLdapServerContextSourceFactoryBean, LdapBindAuthenticationManagerFactory and LdapPasswordComparisonAuthenticationManagerFactory which can be used to create an embedded LDAP Server and an AuthenticationManager that performs LDAP authentication.

Below is an example configuration using the WebSecurityConfigurerAdapter that creates an embedded LDAP server and an AuthenticationManager that performs LDAP authentication using bind authentication:

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth
            .ldapAuthentication()
            .userDetailsContextMapper(new PersonContextMapper())
            .userDnPatterns("uid={0},ou=people")
            .contextSource()
            .port(0);
    }

}
Going forward, the recommended way of doing this is using the new LDAP classes:

@Configuration
public class SecurityConfiguration {
    @Bean
    public EmbeddedLdapServerContextSourceFactoryBean contextSourceFactoryBean() {
        EmbeddedLdapServerContextSourceFactoryBean contextSourceFactoryBean =
            EmbeddedLdapServerContextSourceFactoryBean.fromEmbeddedLdapServer();
        contextSourceFactoryBean.setPort(0);
        return contextSourceFactoryBean;
    }

    @Bean
    AuthenticationManager ldapAuthenticationManager(
            BaseLdapPathContextSource contextSource) {
        LdapBindAuthenticationManagerFactory factory = 
            new LdapBindAuthenticationManagerFactory(contextSource);
        factory.setUserDnPatterns("uid={0},ou=people");
        factory.setUserDetailsContextMapper(new PersonContextMapper());
        return factory.createAuthenticationManager();
    }
}

Spring Security: JDBC Authentication example

Below is an example configuration using the WebSecurityConfigurerAdapter with an embedded DataSource that is initialized with the default schema and has a single user:

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .build();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        UserDetails user = User.withDefaultPasswordEncoder()
            .username("user")
            .password("password")
            .roles("USER")
            .build();
        auth.jdbcAuthentication()
            .withDefaultSchema()
            .dataSource(dataSource())
            .withUser(user);
    }
}
The recommended way of doing this is registering a JdbcUserDetailsManager bean:

@Configuration
public class SecurityConfiguration {
    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .addScript(JdbcDaoImpl.DEFAULT_USER_SCHEMA_DDL_LOCATION)
            .build();
    }

    @Bean
    public UserDetailsManager users(DataSource dataSource) {
        UserDetails user = User.withDefaultPasswordEncoder()
            .username("user")
            .password("password")
            .roles("USER")
            .build();
        JdbcUserDetailsManager users = new JdbcUserDetailsManager(dataSource);
        users.createUser(user);
        return users;
    }
}
Note: In these examples, we use the method User.withDefaultPasswordEncoder() for readability. It is not intended for production and instead we recommend hashing your passwords externally. One way to do that is to use the Spring Boot CLI as described in the reference documentation.

Spring Security: In-Memory Authentication example

Below is an example configuration using the WebSecurityConfigurerAdapter that configures an in-memory user store with a single user:

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        UserDetails user = User.withDefaultPasswordEncoder()
            .username("user")
            .password("password")
            .roles("USER")
            .build();
        auth.inMemoryAuthentication()
            .withUser(user);
    }
}
The recommended way of doing this is registering an InMemoryUserDetailsManager bean:

@Configuration
public class SecurityConfiguration {
    @Bean
    public InMemoryUserDetailsManager userDetailsService() {
        UserDetails user = User.withDefaultPasswordEncoder()
            .username("user")
            .password("password")
            .roles("USER")
            .build();
        return new InMemoryUserDetailsManager(user);
    }
}
Note: In these examples, we use the method User.withDefaultPasswordEncoder() for readability. It is not intended for production and instead we recommend hashing your passwords externally. One way to do that is to use the Spring Boot CLI as described in the reference documentation.

Spring Security: Global and Local AuthenticationManager example

Global AuthenticationManager

To create an AuthenticationManager that is available to the entire application you can simply register the AuthenticationManager as a @Bean.

@Configuration
public class SecurityConfiguration {
    @Bean
    public EmbeddedLdapServerContextSourceFactoryBean contextSourceFactoryBean() {
        EmbeddedLdapServerContextSourceFactoryBean contextSourceFactoryBean =
            EmbeddedLdapServerContextSourceFactoryBean.fromEmbeddedLdapServer();
        contextSourceFactoryBean.setPort(0);
        return contextSourceFactoryBean;
    }

    @Bean
    AuthenticationManager ldapAuthenticationManager(
            BaseLdapPathContextSource contextSource) {
        LdapBindAuthenticationManagerFactory factory = 
            new LdapBindAuthenticationManagerFactory(contextSource);
        factory.setUserDnPatterns("uid={0},ou=people");
        factory.setUserDetailsContextMapper(new PersonContextMapper());
        return factory.createAuthenticationManager();
    }
}

Local AuthenticationManager

In Spring Security 5.6 we introduced the method HttpSecurity#authenticationManager that overrides the default AuthenticationManager for a specific SecurityFilterChain.
Below is an example configuration that sets a custom AuthenticationManager as the default:

@Configuration
public class SecurityConfiguration {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests((authz) -> authz
                .anyRequest().authenticated()
            )
            .httpBasic(withDefaults())
            .authenticationManager(new CustomAuthenticationManager());
        return http.build();
    }

}

Spring Security without the WebSecurityConfigurerAdapter

Spring Security is moving towards a component-based security configuration. In Spring Security 5.7.0-M2, WebSecurityConfigurerAdapter is deprecated.

To assist with the transition to this new style of configuration, we have compiled a list of common use cases and the suggested alternatives going forward.

In these examples we follow best practice by using the Spring Security lambda DSL and the method HttpSecurity#authorizeHttpRequests to define our authorization rules. If you are new to the lambda DSL, you can read about it in this blog post. If you would like to learn more about why we chose to use HttpSecurity#authorizeHttpRequests, check out the reference documentation.
